You’re not imagining it. Smart speakers inadvertently listen to you all the time.
How often do Alexa and her ilk listen to your conversations? Maybe more than you think. According to a new report from Northeastern University, smart speakers accidentally activate as many as 19 times a day, recording as much as 43 seconds of audio each time.
There’s plenty of anecdotal evidence that voice-activated assistants aren’t perfect and will wake up accidentally and start recording if they think you’ve said their trigger word. That’s not great if you’re a privacy-minded smart speaker owner (this could be an oxymoron), especially since the companies that make these devices hire contractors and employees to listen to small snippets of recordings. But Northeastern’s Mon(IoT)r Research Group wanted to quantify for the first time how often these activations happen and what the devices hear when they do.
“The anecdotes aren’t wrong,” David Choffnes, an associate professor in computer science at Northeastern who worked on the report, told Recode.
Researchers tested five types of speakers: a first-generation Google Home Mini, a first-generation Apple HomePod, Microsoft’s Harman Kardon Invoke, and the second- and third-generation Amazon Echo Dots. For the experiment, they forced the speakers to binge-listen to several 125-hour cycles of television shows including Gilmore Girls, The West Wing, The Big Bang Theory, and Narcos while monitoring whether, when, and how the devices were accidentally triggered.
The good news is they didn’t find any evidence that the devices were constantly recording conversations, and about half of the accidental activations lasted less than six seconds.
Here’s the bad news: About half of the accidental recordings lasted six seconds or longer, with some recording as much as 43 seconds of audio without the hypothetical user’s permission or, possibly, knowledge. Choffnes said these long activations were rare, but they did happen.
That said, the test environment did not fully replicate the environment in which these speakers are used; the audio comes from television show dialogue, which is not always representative of how humans talk. The audio also comes from a speaker rather than a human mouth, so the voice assistants’ actions aren’t necessarily representative of a real-life situation.
As you can guess, the devices were usually activated when a word similar to the trigger word was spoken: “I can work” instead of “OK Google,” “congresswoman” instead of “Alexa,” “he clearly” instead of “Siri,” and “Colorado” instead of “Cortana.” While some speakers were better than others, they were all prone to accidental activations, ranging from an average of 1.5 to 19 times per day. The Apple and Microsoft devices activated more often than the others.
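Real assistants match audio against acoustic models of their wake words, not text, but even a crude character-level comparison hints at why these particular phrases slip through. The sketch below (an illustration, not how any of these devices actually work) scores the study's confusable phrases against their wake words using Python's standard-library string matcher:

```python
from difflib import SequenceMatcher

# Phrase pairs reported in the study: (accidental trigger, actual wake word).
# Wake-word detectors operate on audio, so this text-level similarity is
# only a rough stand-in for the phonetic overlap that fools them.
pairs = [
    ("i can work", "ok google"),
    ("congresswoman", "alexa"),
    ("he clearly", "hey siri"),
    ("colorado", "cortana"),
]

for phrase, wake_word in pairs:
    # ratio() returns a similarity score between 0.0 and 1.0.
    score = SequenceMatcher(None, phrase, wake_word).ratio()
    print(f"{phrase!r} vs {wake_word!r}: similarity {score:.2f}")
```

Even this naive measure gives nonzero overlap for every pair, and the spoken versions are far closer than the spellings suggest, which is exactly the gap the detectors fall into.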
While this study’s findings are recent, the news that these speakers can be activated accidentally and record random conversations is not. If you have a smart speaker in your house, it has surely happened to you.
What you might not know is that, whether the activation was intentional or not, the recordings are kept on servers and may even be reviewed by human ears. Sounds creepy, but there is a reason: The more data these companies collect about things like accidental activations, the better they can refine their voice recognition software to prevent them. Apple, Google, Amazon, and Microsoft all do this. The human listeners are not told who they are listening to, but they might hear enough details to connect a voice to a specific person.
The report is part of a still-in-progress larger study, which will also look into what happens to all the data these voice-activated assistants collect. Choffnes hopes consumers will feel more informed and better able to decide for themselves if the privacy risk that comes with voice-activated devices is worth the reward. And that’s assuming consumers even have that option at all. As he points out, it’s increasingly impossible to avoid smart devices, and he doesn’t think the burden should be on consumers to do so in the first place.
“They should not be the ones making these decisions,” Choffnes said. “Ultimately, we should actually have regulations to enforce good behavior in terms of protecting personal data.”
By now you must be wondering what your devices are hearing. As it happens, you might be able to hear the recordings for yourself — and delete them and stop others from listening in. Here’s how to take control of your favorite voice assistants.
Amazon gives you the ability to hear your recordings. The instructions are here. You might even get someone else’s recordings, which happened to one man in Germany. You’ll also see how to delete those recordings and opt out of having your audio recordings reviewed by humans. But before you do that, why not give those recordings a listen and tell us what you heard?
Google also lets you review your recordings and delete them if you’re so inclined (but first please drop us a line and tell us about it). Your recordings won’t be reviewed by humans unless you approve it. Google also says that, yes, it may use the things you say when it’s activated to target ads to you. Click here to access Google’s privacy controls to review and delete recordings (and turn off “ad personalization” to opt out of the targeted ads).
Through your device’s settings, you can opt out of having your interactions with Apple’s Siri stored and reviewed, and you can delete recordings; instructions are here. Apple does not appear to give users the option to review their recordings.
Help Open Sourced’s reporting
Has your smart speaker accidentally recorded something you didn’t intend to be recorded? We want to hear what happened! Just fill out this Google form to share your experience.
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
Author: Sara Morrison