The world’s biggest tech companies have devoted huge resources to voice assistants such as Siri and Alexa. Yet despite user bases numbering in the millions, these assistants have serious flaws, as researchers at Zhejiang University in China recently showed. They found a gaping vulnerability that hackers can exploit simply by sending ultrasonic commands to the voice assistant, gaining access to personal information.
This is a very sneaky exploit, since a hacker can take command of your handheld device while standing right next to you. You’ll never notice, because the voice commands are ‘whispered’ in ultrasound, at frequencies above the human audible range of 20 Hz to 20 kHz.
Although we can’t hear this mosquito squeal, the device’s voice recognition software is perfectly capable of picking up ultrasonic frequencies, which it decodes as instructions for the device.
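To make the trick concrete, here is a minimal toy sketch (not the researchers’ actual code) of the idea behind such an attack: a spoken command is amplitude-modulated onto an ultrasonic carrier, and a mildly nonlinear microphone response shifts a copy of the command back down into the audible band, where the recognizer picks it up. The 192 kHz sample rate, 25 kHz carrier, and nonlinearity coefficients are all illustrative assumptions.

```python
import numpy as np

FS = 192_000  # Hz; sample rate high enough to represent an ultrasonic carrier

def modulate(voice, carrier_hz=25_000.0):
    """Amplitude-modulate a voice command onto an ultrasonic carrier.

    `voice` is a mono float array in [-1, 1] sampled at FS. All of the
    transmitted energy sits around carrier_hz, above human hearing.
    """
    t = np.arange(len(voice)) / FS
    return (1.0 + voice) * np.cos(2 * np.pi * carrier_hz * t)

def microphone(signal, a1=1.0, a2=0.5):
    """Toy model of a microphone with a mildly nonlinear response.

    The quadratic term (a2 * signal**2) is what demodulates the AM
    signal: squaring it produces a baseband copy of the voice command,
    which the assistant's recognizer then 'hears'.
    """
    return a1 * signal + a2 * signal ** 2

# Illustration with a 1 kHz tone standing in for a spoken command:
t = np.arange(FS) / FS
voice = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # the "command", at 1 kHz
captured = microphone(modulate(voice))        # what the mic outputs
spectrum = np.abs(np.fft.rfft(captured))
freqs = np.fft.rfftfreq(len(captured), 1 / FS)
print(f"energy at 1 kHz: {spectrum[np.argmin(np.abs(freqs - 1_000))]:.0f}")
```

In this model, nothing audible is ever transmitted, yet the captured signal carries energy right at the command’s frequency, which is exactly why the recognizer treats it as ordinary speech.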
The Zhejiang researchers showed that this exploit, aptly called DolphinAttack, can be used to send commands to popular devices from Apple, Google, Amazon, Microsoft, Samsung, and Huawei. They mounted the attack using a common smartphone with about $3 worth of additional hardware: an amplifier and an ultrasonic transducer.
Mark Wilson, writing for Fast Company, described what happened next:
The researchers didn’t just activate basic commands like “Hey Siri” or “Okay Google,” though. They could also tell an iPhone to “call 1234567890” or tell an iPad to FaceTime the number. They could force a Macbook or a Nexus 7 to open a malicious website. They could order an Amazon Echo to “open the backdoor.” Even an Audi Q3 could have its navigation system redirected to a new location.
“Inaudible voice commands question the common design assumption that adversaries may at most try to manipulate a [voice assistant] vocally and can be detected by an alert user,” the research team writes in a paper just accepted to the ACM Conference on Computer and Communications Security.
It has to be said that the transmitter had to be within a couple of inches of some devices for the exploit to work, though others, such as the Apple Watch, were vulnerable from several feet away. Even so, a hacker would only need to stand next to a vulnerable device in a crowd or on public transit to trick it into opening a malicious website.
At this point, some readers might be wondering why manufacturers don’t simply restrict their devices to the audible range. The problem is that doing so would sacrifice performance and user experience, because the filtering algorithms in speech recognition pipelines exploit harmonic content that lies outside the human range of hearing. Moreover, manufacturers use many different microphones, most of which are designed to transduce pressure waves into electricity regardless of frequency, so blocking ultrasound at the hardware level is practically impossible.
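For illustration, the software-side fix this paragraph alludes to would amount to low-pass filtering the captured audio before recognition. A hedged sketch, assuming the ultrasonic content actually survives into the digital signal; the function name, cutoff, and filter order are my own choices, not anything from the paper:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def audible_band_only(audio: np.ndarray, fs: int,
                      cutoff_hz: float = 20_000.0) -> np.ndarray:
    """Discard everything above the audible band before recognition.

    An 8th-order Butterworth low-pass applied to the captured audio.
    This captures the trade-off described above: it suppresses
    out-of-band input, but also removes the above-audible harmonic
    content that recognition pipelines rely on for accuracy.
    """
    sos = butter(8, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos, audio)
```

The catch is that a filter like this throws away useful signal along with the attack, which is precisely why manufacturers have been reluctant to adopt it.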
It’s up to Google, Amazon, Apple, and the like to decide how they’ll address this vulnerability.
Meanwhile, the best thing you can do to keep your device safe is to turn off ‘always-on’ listening, which is typically enabled by default. Otherwise, a hacker might be able to send commands via DolphinAttack even while the device is locked.