What is an abuse risk?


An abuse risk is a product feature that can cause unexpected damage to a user or platform when used in an unintended manner. Abuse risks arise when products lack sufficient protections against their features being used maliciously.


For example, the ability to import your contacts into a social network app to see which of your friends are using the app would be considered a feature. This same feature could become an abuse risk if there is no quota on the number of contact lookups that can be performed within a given timeframe. Without any restrictions in place, malicious actors could use it to build a large database of users for their spam campaigns.
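
To make the mitigation concrete, here is a minimal sketch of the kind of quota that closes this gap. The function names and limits are hypothetical and purely illustrative, not taken from any real product:

```python
import time
from collections import defaultdict

# Hypothetical quota: each account may perform at most MAX_LOOKUPS
# contact lookups within a sliding WINDOW_SECONDS window. The values
# are illustrative placeholders, not real product limits.
MAX_LOOKUPS = 200
WINDOW_SECONDS = 24 * 60 * 60

_lookup_log = defaultdict(list)  # account_id -> timestamps of recent lookups

def allow_contact_lookup(account_id: str) -> bool:
    """Return True if the account may perform another contact lookup."""
    now = time.time()
    # Drop timestamps that have aged out of the window.
    recent = [t for t in _lookup_log[account_id] if now - t < WINDOW_SECONDS]
    _lookup_log[account_id] = recent
    if len(recent) >= MAX_LOOKUPS:
        return False  # Quota exhausted: deny the lookup, optionally flag for review.
    recent.append(now)
    return True
```

A scraper attempting thousands of lookups per day hits the cap almost immediately, while a legitimate user importing their address book never notices the limit.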


Unlike security vulnerabilities, where an identified loophole requires a fix, abuse risks are often inherent to product features. That means they often shouldn't be eliminated entirely; instead, products need protections that mitigate their exploitation at scale.


Preventing abuse in the design phase


When we design our products, they go through multiple reviews in which we aim to prevent or mitigate each abuse risk before launch. During these reviews, our product abuse, privacy, and security experts, who work across many different teams within Google, define the threat model for each new product or feature launch to ensure that it launches with the safest and best user experience.


Even though new product launches are subjected to multiple reviews, sometimes there are abuse cases that we may not have thought of. Thanks to our collaboration with the security community, we can identify and fix these issues before our adversaries get the chance to exploit them.


How we assess abuse risk reports


For any given report submitted to Google's VRP, we initially triage whether it describes a security vulnerability, a significant abuse risk, or a non-issue. If a report describes an issue that doesn't fall under the traditional definition of a security vulnerability, but could still harm our users or products, then that report is routed to the product abuse experts within Google's Trust & Safety Team.


When we decide not to "Accept" a report in our program, the most common reason is that we don't consider the proposed attack scenario severe enough. If you see something that we may have missed, please feel free to respond with a more detailed attack scenario. We read all responses to our bugs, even after they are closed.


The most important thing when writing the attack scenario for an abuse risk is to think through how the attack would play out and what the overall damage to a user or platform would be. Reports that don't have a clear victim or abuse scenario, or where the attack only affects the attacker's own user experience, will most likely be out of scope. Please bear in mind that we only consider attacks that can scale up, or that have privacy consequences, to be significant abuse risks. One-off instances of abuse are not in scope. Common issues that fall into this category are reports related to spam, content, or refund abuse.


With regard to the reward amount, an abuse risk's impact is measured by the number of users at risk and the user privacy at issue. Abuse risks that are highly scalable, and can therefore affect more users, are considered high risk. Similarly, reports touching on user privacy, meaning the issue could result in a leak of users' personal data, are rated as higher-impact abuse risks based on the sensitivity of that data. Overall, each report is assessed against the likelihood of a successful attack combined with the impact of a reproducible attack scenario on our users and platforms.
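
As a rough illustration of that calculus, the sketch below combines likelihood and impact into a single score. The factor names, categories, and weights are invented for the example; they are not Google's actual rubric:

```python
# Hypothetical scoring sketch: the categories and weights below are
# invented for illustration and are not Google's actual rubric.
SCALABILITY = {"one_off": 1, "scriptable": 3, "fully_automated": 5}
DATA_SENSITIVITY = {"none": 0, "public_profile": 2, "private_contact": 4, "credentials": 5}

def abuse_risk_score(scalability: str, sensitivity: str, likelihood: float) -> float:
    """likelihood: estimated probability (0..1) that the attack works as described."""
    impact = SCALABILITY[scalability] + DATA_SENSITIVITY[sensitivity]
    return likelihood * impact

# A scriptable attack leaking private contact data with a high chance of
# success scores far above a one-off issue that exposes no data.
print(abuse_risk_score("scriptable", "private_contact", 0.8))  # 5.6
print(abuse_risk_score("one_off", "none", 0.9))                # 0.9
```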