Verify the output of the tools

Bughunters frequently use tools that automate the job of finding vulnerabilities. These tools come in various flavors - automated scanners, fuzzers, or even short proof-of-concept scripts. Properly configured tools can be indispensable, quickly identifying bugs that would be very hard to find manually. At the same time, their output is not perfect: it can almost never be taken at face value and must be reviewed and verified by the researcher. In other words, make "trust, but verify" your motto!

While we always put effort into understanding and analyzing incoming reports, it is incredibly important that you verify scan results and provide a realistic attack scenario when submitting vulnerability reports. When working with automated tools, always double-check their output and make sure the findings are not false positives. As a rule, before sending the report, try to really understand how the vulnerability is supposed to work and how our services could be affected by it - and capture that in your report. A higher-quality report is far more likely to be processed efficiently.

Commonly reported false positives include:

  • Google servers vulnerable to CRIME, BEAST or POODLE. Some automated scanners incorrectly detect that servers at www.google.com or other popular Google domains are vulnerable to CRIME, BEAST or POODLE. That's pretty unlikely, as we have various mitigations in place. Read more about this in Commonly reported SSL/TLS vulnerabilities.

  • SQL injection. For example, one automated tool "detected" an SQL vulnerability in this "PHP script": ,null,11221133,null,null/*

    The thing is, YouTube does not use PHP. In fact, the supposedly vulnerable location does not exist at all. The tool probably noticed that the string passed in the URL was reflected somewhere in the HTML code - and incorrectly inferred the presence of a security bug. The reporter spent a lot of time writing up the report and explaining the implications of SQL flaws, but neglected to verify the presence of the underlying bug (even by simply visiting the page). It is always heartbreaking to put a lot of effort into a report and then have it rejected, so be sure to confirm the issue first! The first sketch after this list shows one way to do that.

    In fact, the stats we gather show that looking for SQL injection issues in core Google products is probably not the best investment of a bughunter's time. In 2014, we received over 80 SQL injection reports, but none of them resulted in a reward or credit.

  • XSRF. Some reporters rely on automated heuristics to detect whether an HTML form is prone to XSRF. Alas, due to the nature of this vulnerability, it is difficult to write a truly robust automated check, and the output from a scanner will always need to be manually verified - a fact acknowledged by the authors of such tools. We often get reports with XSRF proof-of-concept code copied and pasted from an automated report, only to realize that the snippet actually contains an XSRF token... just under a less obvious name. The second sketch after this list shows one way to spot such tokens.

  • Output from scanners analyzing HTTP headers. Many scanners check HTTP request and response headers - and flag the presence or absence of certain values as a potential security risk. Unfortunately, such tools tend to be extremely noisy: not every cookie needs to be httpOnly, not every resource needs to be served with X-Frame-Options, and not every CORS header (e.g. Access-Control-Allow-Origin: *) is a problem, especially if set on a static file that is meant to be publicly accessible. The third sketch after this list illustrates this kind of triage.
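
For the SQL injection example above, a quick manual check would have caught the false positive. Below is a minimal sketch in Python (using the third-party requests library) of the kind of check you can run yourself. The URL, the parameter name, and the probe strings are hypothetical placeholders, not a definitive test:

    # Sketch: is a reported "SQL injection" just harmless reflection?
    # The endpoint and parameter below are hypothetical placeholders.
    import uuid
    import requests

    SUSPECT_URL = "https://example.com/page"  # substitute the reported URL
    PARAM = "id"                              # substitute the reported parameter

    def looks_like_plain_reflection(url, param):
        """True if an arbitrary, SQL-free marker is echoed back verbatim -
        a hint that the scanner saw reflection, not SQL evaluation."""
        marker = uuid.uuid4().hex  # unique string with no SQL meaning
        resp = requests.get(url, params={param: marker}, timeout=10)
        return marker in resp.text

    def boolean_probes_differ(url, param):
        """Compare a boolean-true probe with a boolean-false one. Identical
        responses argue against injection; a difference still needs manual
        review before it belongs in a report."""
        true_resp = requests.get(url, params={param: "1' OR '1'='1"}, timeout=10)
        false_resp = requests.get(url, params={param: "1' AND '1'='2"}, timeout=10)
        return true_resp.text != false_resp.text

    if looks_like_plain_reflection(SUSPECT_URL, PARAM):
        print("Marker reflected verbatim - the finding is likely a false positive.")
    elif boolean_probes_differ(SUSPECT_URL, PARAM):
        print("Responses differ - investigate manually before reporting.")
    else:
        print("No evidence that the parameter reaches a SQL query.")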
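
Similarly, before copying a scanner's XSRF proof of concept into a report, inspect the form's hidden inputs yourself. The sketch below (standard-library Python) lists them and applies a rough, assumed heuristic - long, high-entropy values are often anti-XSRF tokens even when the field name does not say so. The thresholds and the sample form are illustrative, not a rule:

    # Sketch: list a form's hidden inputs and flag likely anti-XSRF tokens,
    # even when the token hides under a non-obvious field name.
    import math
    from collections import Counter
    from html.parser import HTMLParser

    class HiddenInputCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.hidden = []  # (name, value) pairs of hidden <input> fields

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "input" and a.get("type") == "hidden":
                self.hidden.append((a.get("name", ""), a.get("value", "")))

    def shannon_entropy(s):
        """Bits per character; random tokens score high, static values low."""
        if not s:
            return 0.0
        counts = Counter(s)
        return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

    def flag_possible_tokens(form_html):
        collector = HiddenInputCollector()
        collector.feed(form_html)
        for name, value in collector.hidden:
            # Heuristic thresholds, not a rule: >= 16 chars and > 3 bits/char.
            suspicious = len(value) >= 16 and shannon_entropy(value) > 3.0
            print(name, "->", "possible XSRF token" if suspicious else "static value")

    # Hypothetical form: the anti-XSRF token is named "state", not "csrf_token".
    flag_possible_tokens(
        '<form><input type="hidden" name="state" '
        'value="f3a9c1d27b4e8a0c5d6f1e2b3a4c5d6e">'
        '<input type="hidden" name="lang" value="en"></form>'
    )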
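
Finally, header-scanner output needs the same contextual triage. The sketch below (Python with requests again) shows the kind of questions to ask before reporting a header finding; the cookie-name hints and the context rules are illustrative assumptions, not authoritative criteria:

    # Sketch: triage common header-scanner findings in context instead of
    # reporting them wholesale. The rules below are illustrative assumptions.
    import requests

    SESSION_HINTS = ("sess", "sid", "auth", "token")  # hypothetical name hints

    def triage_headers(url):
        resp = requests.get(url, timeout=10)

        # A missing HttpOnly flag mostly matters on session-like cookies.
        for cookie in resp.cookies:
            session_like = any(h in cookie.name.lower() for h in SESSION_HINTS)
            http_only = any(cookie.has_nonstandard_attr(a)
                            for a in ("HttpOnly", "httponly"))
            if session_like and not http_only:
                print("worth a look: cookie", cookie.name, "lacks HttpOnly")

        # A wildcard CORS header on a public static file is by design; it is
        # only interesting on responses that differ per user.
        acao = resp.headers.get("Access-Control-Allow-Origin")
        varies_per_user = "cookie" in (resp.headers.get("Vary") or "").lower()
        if acao == "*" and varies_per_user:
            print("worth a look: wildcard CORS on a per-user response")

        # Missing X-Frame-Options only matters if framing the page lets an
        # attacker trigger a state-changing action (clickjacking).
        if "X-Frame-Options" not in resp.headers:
            print("note: no X-Frame-Options; check for clickjacking impact first")

    triage_headers("https://example.com/")  # hypothetical target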

In other words, once again: "trust, but verify". It's much easier for us to triage your report if you've verified the bug and provided a viable attack scenario. If there does not appear to be a realistic security impact, the report is far less likely to qualify for a reward or credit.

Sending multiple incomplete reports containing unverified findings from automated tools in the hope of a reward is a bad bughunting strategy; your time is better invested in searching for the next, valid bug.