Smart voices
The microphone is one of the most useful modern inventions. Initially, the technology was used to record human speech or music and to enable telecommunication between people. However, thanks to recent advances in computing, it’s now possible to use microphones to control smart devices in and around our homes. You can have rich interactions with voice-enabled devices and issue vocal commands to search for things online, play a certain podcast, or even adjust your home’s thermostat. Microphones are so ubiquitous nowadays, it’s almost ridiculous. They’re not only in devices we carry around with us all the time, such as phones, tablets, watches, and headphones, but also in remote controls, speakers, cars, and even in toys and household appliances.
In fact, maybe it is ridiculous.
While there’s no denying these microphone-enabled devices are useful, opaque communication protocols raise important questions about how all of this audio data is stored and used. Many people are aware that audio recordings can be used for tracking, consumer behavior profiling, and serving targeted advertising. But there’s much more you can do with just a few samples of a person’s speech, and some applications are far more nefarious.
By tuning into your voice, AI tools can infer personality traits, moods and emotions, age and gender, drug use, native language, socioeconomic status, mental and physical state, and a range of other features with fairly high accuracy. If a human can spot these things from a person’s voice, so can an automated system. In some instances, you don’t even need a mic: researchers have shown that a phone’s accelerometer data alone can be used to reconstruct ambient speech, which can then be put to purposes ranging from customer profiling to unauthorized surveillance.
No one is saying that tech giants or state entities are doing this, but the fact that they could is backed up by studies and evidence from “ethical hackers”. These are important privacy concerns — and most people aren’t aware of them, according to a new study conducted by researchers in Germany.
The researchers, led by Jacob Leon Kröger, conducted a nationally representative survey of 683 individuals in the UK to gauge how aware they were of the inferential power of voice and speech analysis. Only 18.7% of participants were at least “somewhat aware” that information about an individual’s physical and mental health can be gleaned from voice recordings, and 42.5% didn’t think such inferences were possible at all. Even among participants with experience in computer science, data mining, and IT security, awareness of what kind of information can be inferred from voice recordings was astonishingly low.
After the survey, each participant watched a brief educational video explaining how vocal analysis can expose potentially sensitive personal information. But even then, the participants expressed only “moderate” privacy concerns, although most reported a lower intention to use voice-enabled devices than before taking the survey.
It’s not that the participants didn’t care about their privacy at all, though. Based on an analysis of open text responses, “unconcerned reactions seem to be largely explained by knowledge gaps about possible data misuses,” the researchers wrote in their study, which appeared in the journal Proceedings on Privacy Enhancing Technologies.
A lot of apps ask for access to your microphone, and just as we often agree to a 5,000-word terms and conditions document without reading it, most people voluntarily bug their own phones or homes. The German researchers found it striking that many participants did not offer a solid justification for their reported lack of privacy concern, which points to misconceptions and a false sense of security.
“In discussing the regulatory implications of our findings, we challenge the notion of ‘informed consent’ to data processing. We also argue that inferences about individuals need to be legally recognized as personal data and protected accordingly,” the authors wrote.
“To prevent consent from being used as a loophole to excessively reap data from unwitting individuals, alternative and complementary technical, organizational, and regulatory safeguards urgently need to be developed. At the very least, inferred information relating to an individual should be classified as personal data by law, subject to corresponding protections and transparency rights,” they added.