Hackers Can Silently Control Your Google Home, Alexa, Siri With Laser Light

A team of cybersecurity researchers has discovered a clever technique to remotely inject inaudible and invisible commands into voice-controlled devices — all just by shining a laser at the targeted device instead of using spoken words.

Dubbed 'Light Commands,' the hack relies on a vulnerability in the MEMS microphones embedded in widely used voice-controllable systems, which unintentionally respond to light as if it were sound.

According to experiments by a team of researchers from Japan's University of Electro-Communications and the University of Michigan, a remote attacker standing several meters away from a device can covertly trigger the attack simply by modulating the amplitude of laser light, which the targeted microphone interprets as an acoustic pressure wave.

"By modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio," the researchers said in their paper [PDF].

Doesn't this sound creepy? Now read this part carefully…

Smart voice assistants in your phones, tablets, and other smart devices, including Google Home and Nest Cam IQ, Amazon Alexa and Echo, Facebook Portal, and Apple's Siri-enabled devices, are all vulnerable to this new light-based signal injection attack.

"As such, any system that uses MEMS microphones and acts on this data without additional user confirmation might be vulnerable," the researchers said.

Since the technique ultimately allows attackers to inject commands as a legitimate user, the impact of such an attack can be evaluated based on the level of access your voice assistants have over other connected devices or services.

Therefore, with the light commands attack, the attackers can also hijack any digital smart systems attached to the targeted voice-controlled assistants, for example:
  • Control smart home switches,
  • Open smart garage doors,
  • Make online purchases,
  • Remotely unlock and start certain vehicles,
  • Open smart locks by stealthily brute-forcing the user's PIN.
As shown in the video demonstration below: In one of their experiments, the researchers injected the command "OK Google, open the garage door" into a Google Home by shooting a laser beam at it, and successfully opened the garage door connected to it.


In a second experiment, the researchers successfully issued the same command, but this time from a separate building, about 230 feet away from the targeted Google Home device through a glass window.


Besides longer-range devices, the researchers also tested their attack against a variety of smartphones that use voice assistants, including the iPhone XR, Samsung Galaxy S9, and Google Pixel 2, but found that it works on them only at short distances.

The maximum range for this attack depends upon the power of the laser, the intensity of the light, and of course, the attacker's aiming capabilities. Besides this, physical barriers (e.g., windows) and the difficulty of keeping the laser focused on the microphone port can further reduce the range of the attack.

Moreover, in cases where speaker recognition is enabled, attackers can defeat that authentication feature by constructing recordings of the desired voice commands from relevant words spoken by the device's legitimate owner in other contexts.
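The word-splicing idea can be sketched in a few lines: given short recordings of the owner saying individual words, an attacker concatenates them into a new command. The clip dictionary and its toy sample values below are invented for illustration; a real attack would work with actual audio buffers.

```python
def splice(clips: dict[str, list[int]], words: list[str]) -> list[int]:
    """Concatenate per-word sample buffers into one command waveform."""
    command: list[int] = []
    for word in words:
        command.extend(clips[word])  # each word captured separately elsewhere
    return command

# e.g., the owner previously said "open", "the", "garage", "door"
# in unrelated conversations (toy sample data, not real audio):
clips = {"open": [1, 2], "the": [3], "garage": [4, 5], "door": [6]}
payload = splice(clips, ["open", "the", "garage", "door"])
```

The spliced payload sounds like the owner to a voice-matching model because every fragment genuinely is the owner's voice.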

According to the researchers, these attacks can be mounted "easily and cheaply," using a simple laser pointer (under $20), a laser driver ($339), and a sound amplifier ($28). For their setup, they also used a telephoto lens ($199.95) to focus the laser for long-range attacks.

How can you protect yourself against Light Commands in real life? Software makers should offer users the option to add an extra layer of authentication before commands are processed, to mitigate such attacks.
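One possible shape for that extra layer, sketched below as a hypothetical assistant handler (the command list, helper names, and challenge scheme are assumptions, not any real assistant's API): before acting on a sensitive command, the device speaks a random challenge and only proceeds if the user repeats it aloud, something a silent laser attacker cannot predict or hear.

```python
import random

# Hypothetical set of commands that should require confirmation:
SENSITIVE = {"open the garage door", "unlock the front door", "buy"}

def needs_confirmation(command: str) -> bool:
    """Flag commands that touch locks, doors, or purchases."""
    return any(phrase in command for phrase in SENSITIVE)

def handle(command: str, spoken_response: str, challenge: str) -> bool:
    """Execute non-sensitive commands freely; gate sensitive ones
    behind a spoken repetition of the randomized challenge."""
    if not needs_confirmation(command):
        return True
    return spoken_response == challenge

challenge = str(random.randint(1000, 9999))
# A laser attacker injecting a command cannot supply the spoken challenge:
assert handle("open the garage door", "", challenge) is False
```

The design choice worth noting: the challenge is randomized per request, so even an attacker who can inject arbitrary audio cannot pre-record the correct response.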

For now, the best and most practical solution is to keep your voice assistant devices physically out of the line of sight of windows and other outside vantage points, and to avoid giving them access to anything you don't want someone else to control.


The team of researchers—Takeshi Sugawara from Japan's University of Electro-Communications and Kevin Fu, Daniel Genkin, Sara Rampazzi, and Benjamin Cyr from the University of Michigan—also released their findings in a paper [PDF] on Monday.
