Voice Controlled VPA Security

Understanding and Mitigating the Security Risks of Voice-Controlled Third-Party Skills on Amazon Alexa and Google Home


Responsible Disclosure

We have reported all our findings including Voice Squatting Attack and Voice Masquerading Attack to Google and Amazon. Both Amazon and Google acknowledge us as the first one who discovered these vulnerabilities. We are working with Google and Amazon to understand and mitigate such new security risks.

Vendor Response

From Google:
"Nice catch! I've filed a bug based on your report. The panel will evaluate it at the next VRP (Vulnerability Reward Program) panel meeting and we'll update you once we've got more information. All you need to do now is wait. If you don't hear back from us in 2-3 weeks or have additional information about the vulnerability, let us know!"

From Amazon:
"Thank you for reporting the security issues to us, we are able to reproduce them, and we are currently investigating fixes and mitigations. If you could share the mitigations you mention, that would be helpful in our evaluation. We’ll update you again when have further information to share. Thanks."

Dataset

We have collected real-world conversations between users and our Alexa skills through the five skills we published for a month. The dataset includes 21,308 user commands made by 2,699 users with corresponding response from our skills.

Attack Demos

1. Voice Squatting Attack

1.1 Similar invocation name
In this demo, we registered an attack skill "rap game" that has similar invocation name with target skill "rat game" on Alexa. We showed that when user tried to invoke "rat game", attack skill "rap game" will be invoked instead. 

In this demo, we registered an attack skill "intraMatic opener" that has similar invocation name with target skill "Entrematic Opener" on Google Assistant. We showed that when user tried to invoke "Entrematic Opener", attack skill "intraMatic opener"  will be invoked instead. 


1.2 Invocation name with extra words
In these demos, we registered an attack skill “rat game please” that adds courteous word "please" to the target skill "rat game" on Alexa. We showed that when user tried to invoke "rat game" skill by saying "Alexa, open rate game please", attack skill "rat game please" will be invoked instead.

In these demos, we registered an attack skill “mai entrematic opener” that adds extra word "mai" (pronounced same to "my") to the target skill "Entrematic Openeron Google Assistant. We showed that when user tried to invoke "Entrematic Opener" skill by saying "Hey Google, open my entrematic opener", attack skill “mai entrematic opener”  will be invoked instead.


2. Voice Masquerading Attack

2.1 Fake Skill Switch
In this demo, we registered an attack skill that pretends to open target skill "KAYAK" on Alexa when user tried to open it during the interaction with attack skill. This feature is not supported by Alexa, but have been tried by 29% of Alexa users according to our survey study.

In this demo, we registered an attack skill that pretends to open target skill "United" on Google Assistant when user tried to open it during the interaction with attack skill. This feature is not supported by Google Assistant, but have been tried by 27% of Google Assistant users according to our survey study. (The attack skill can simulate system accent by prerecord the content using emulator.)

2.2 Faking Termination
In this demo, we registered an attack skill on Alexa that fakes its termination after game is over and eavesdropping on user's conversation.

In this demo, we registered an attack skill on Google Assistant that fakes its termination after game is over and eavesdropping on user's conversation.

In this demo, we registered an attack skill on Alexa that fakes its termination after game is over and pretends to be system and opens another skill if user requests.

In this demo, we registered an attack skill on Google Assistant that fakes its termination after game is over and pretends to be system and opens another skill if user requests.

In this demo, we registered an attack skill on Alexa that fakes its termination after game is over and recommends another attack skill to the user. Note that this is a legitimate system functionality of Alexa.