This set of pages documents the setup and operation of the GPU bots and try servers, which verify the correctness of Chrome's graphically accelerated rendering pipeline.
The GPU bots run a different set of tests than the majority of the Chromium test machines: they focus specifically on tests that exercise the graphics processor and whose results are likely to vary between graphics card vendors.
Most of the tests on the GPU bots are run via the Telemetry framework. Telemetry was originally conceived as a performance testing framework, but has proven valuable for correctness testing as well. Telemetry directs the browser to perform various operations, like page navigation and test execution, from external scripts written in Python. The GPU bots launch the full Chromium browser via Telemetry for the majority of the tests. Using the full browser to execute tests, rather than smaller test harnesses, has yielded several advantages: testing what is shipped, improved reliability, and improved performance.
A subset of the tests, called "pixel tests", grab screen snapshots of the web page in order to validate Chromium's rendering architecture end-to-end. Where necessary, GPU-specific results are maintained for these tests. Some of these tests verify just a few pixels, using handwritten code, in order to use the same validation for all brands of GPUs.
The GPU bots use the Chrome infrastructure team's recipe framework, and specifically the chromium and chromium_trybot recipes, to describe what tests to execute. Compared to the legacy master-side buildbot scripts, recipes make it easy to add new steps to the bots, change the bots' configuration, and run the tests locally in the same way that they are run on the bots. Additionally, the chromium and chromium_trybot recipes make it possible to send try jobs which add new steps to the bots. This single capability is a huge step forward from the previous configuration, in which new steps were added blindly and could cause failures on the tryservers. For more details about the configuration of the bots, see the GPU bot details.
The physical hardware for the GPU bots lives in the Swarming pool*. The Swarming infrastructure (new docs, older but currently more complete docs) provides many benefits.
(*All but a few one-off GPU bots are in the swarming pool. The exceptions to the rule are described in the GPU bot details.)
The bots on the chromium.gpu.fyi waterfall are configured to always test top-of-tree ANGLE. This setup is done with a few lines of code in the tools/build workspace; search the code for "angle".
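For example, from the root of a tools/build checkout, a generic search such as the following (ordinary git grep, not a documented command) will locate the relevant lines:

git grep -in angle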
These aspects of the bots are described in more detail below, and in linked pages. There is a presentation which gives a brief overview of this documentation and links back to various portions.
Most Chromium developers interact with the GPU bots in two ways:

1. Observing the bots on the waterfalls.
2. Sending try jobs to them.
The GPU bots are grouped on the chromium.gpu and chromium.gpu.fyi waterfalls. Their current status can be easily observed there.
To send try jobs, you must first upload your CL to the codereview server. Then, either clicking the "CQ dry run" link or running the following from the command line:

git cl try

sends your job to the default set of try servers.
The GPU tests are part of the default set for Chromium CLs, and are run as part of the following tryservers' jobs:

- linux_chromium_rel_ng
- mac_chromium_rel_ng
- win_chromium_rel_ng
Scan down through the steps looking for the text "GPU"; that identifies the tests run on the GPU bots. For each test, the "trigger" step can be ignored; the step further down with the same name contains the results.
It's usually not necessary to explicitly send try jobs just for verifying GPU tests. If you want to anyway, you must invoke "git cl try" separately for each tryserver master you want to target, for example:
git cl try -b linux_chromium_rel_ng
git cl try -b mac_chromium_rel_ng
git cl try -b win_chromium_rel_ng
Alternatively, the Rietveld UI can be used to send a patch set to these try servers.
Three optional tryservers are also available that run additional tests. As of this writing, they run longer-running tests that can't be run against all Chromium CLs due to lack of hardware capacity. They are automatically included in try jobs for code changes to certain sub-directories.
Tryservers for the ANGLE project are also present on the tryserver.chromium.angle waterfall. These are invoked from the Gerrit user interface. They are configured similarly to the tryservers for regular Chromium patches, and run the same tests that are run on the chromium.gpu.fyi waterfall, in the same way (e.g., against ToT ANGLE).
If you find it necessary to try patches against sub-repositories other than Chromium (src/) and ANGLE (src/third_party/angle/), please file a bug with label Cr-Internals-GPU-Testing.
All of the GPU tests running on the bots can be run locally from a Chromium build. Many of the tests are simple executables.
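For example, assuming a Release build whose output lands in out/Release (an assumption; substitute your own build directory), a GTest-based binary such as gl_unittests can be run directly:

out/Release/gl_unittests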
Some run only on the chromium.gpu.fyi waterfall, either because there isn't enough machine capacity at the moment, or because they're closed-source tests which aren't allowed to run on the regular Chromium waterfalls.
The remaining GPU tests are run via Telemetry. In order to run them, just build the chrome target and then invoke src/content/test/gpu/run_gpu_integration_test.py with the appropriate arguments. The tests this script can invoke are in src/content/test/gpu/gpu_tests/. For example:
run_gpu_integration_test.py webgl_conformance --browser=release
You can also run a subset of tests with this harness:

run_gpu_integration_test.py webgl_conformance --browser=release --test-filter=conformance_attribs
Some of the tests are still hosted by the legacy harness, run_gpu_test.py. To invoke it:

run_gpu_test.py maps --browser=release
Test filtering is done in this harness using the --story-filter argument.
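For instance, a hypothetical invocation restricting the maps test to a single story might look like the following (the story name is a placeholder, not a real story in the suite):

run_gpu_test.py maps --browser=release --story-filter=<story name>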
Figuring out the exact command line that was used to invoke the test on the bots can be a little tricky. The bots all(*) run their tests via Swarming and isolates, meaning that a step like "[trigger] webgl_conformance_tests on NVIDIA GPU..." invokes swarming.py to trigger the task remotely rather than showing the test's own command line directly.
You can figure out the additional command line arguments that were passed to each test on the bots by examining the trigger step and searching for the argument separator (" -- "); everything after the separator is passed on to the test itself. You can leave off the --isolated-script-test-output argument when running locally, and the remaining arguments form the full command line to use.
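As an illustrative sketch only (the exact arguments vary per test and per bot, so copy them from the trigger step rather than from here), a local invocation of the WebGL conformance tests might look like:

src/content/test/gpu/run_gpu_integration_test.py webgl_conformance --show-stdout --browser=release --extra-browser-args=--enable-logging=stderr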
The Maps test requires you to authenticate to cloud storage in order to access the Web Page Replay archive containing the test. See Cloud Storage Credentials for documentation on setting this up.
You can find the isolates for the various tests in the .isolate files under src/chrome/.
The isolates contain the full or partial command line for invoking the target. The complete command line for any test can be deduced from the contents of the isolate plus the stdio output from the test's run on the bot.
Note that for the GN build, the isolates are simply described by build targets, and gn_isolate_map.pyl describes the mapping between isolate name and build target, as well as the command line used to invoke the isolate. Once all platforms have switched to GN, the .isolate files will be obsolete and be removed.
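As a rough sketch of the gn_isolate_map.pyl format (the label and type shown here are illustrative; take the authoritative values for any given test from the file itself), an entry in its top-level dictionary looks like:

  "gl_unittests": {
    "label": "//ui/gl:gl_unittests",
    "type": "console_test_launcher",
  },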
(*) A few of the one-off GPU configurations on the chromium.gpu.fyi waterfall run their tests locally rather than via swarming, in order to decrease the number of physical machines needed.
Any binary run remotely on a bot can also be run locally, assuming the local machine loosely matches the architecture and OS of the bot.
The easiest way to do this is to find the ID of the swarming task and use "swarming.py reproduce" to re-run it:
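A sketch of the invocation, assuming the swarming client checkout path used elsewhere in this document and the public Chromium Swarming server (substitute the real task ID):

./src/tools/swarming_client/swarming.py reproduce -S https://chromium-swarm.appspot.com <task ID>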
The task ID can be found in the stdio for the "trigger" step for the test. For example, look at a recent build from the Mac 10.10 Release (Intel) bot, and look at the gl_unittests step; the trigger output includes the Swarming task's ID.
There is a difference between the isolate's hash and Swarming's task ID. Make sure you use the task ID and not the isolate's hash.
As of this writing, there seems to be a bug when attempting to re-run the Telemetry based GPU tests in this way. For the time being, this can be worked around by instead downloading the contents of the isolate. To do so, find the isolate's hash in the trigger step's log.
As of this writing, the isolate hash appears twice in the command line. The download command is also shown in the "Help" section of the page for the isolate's task, though it's not known whether that page is accessible only to Google employees or to all members of the chromium.org organization. To download the isolate's contents into directory "foo", use isolateserver.py's download command.
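A sketch of that command, assuming the swarming client checkout path and isolate server used elsewhere in this document; flags can differ across client versions, so prefer the command shown in the task page's "Help" section if you have access to it:

./src/tools/swarming_client/isolateserver.py download -I https://isolateserver.appspot.com -s <isolate hash> --target foo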
isolateserver.py will tell you the approximate command line to use. You should concatenate the test's arguments (the TEST_ARGS following the " -- " separator, described above) with isolateserver.py's recommendation. The ISOLATED_OUTDIR variable can be safely replaced with /tmp.
Note that isolateserver.py downloads a large number of files (everything needed to run the test) and may take a while. There is a way to use run_isolated.py to achieve the same result, but as of this writing there were problems doing so, and the procedure is therefore not documented at this time.
Before attempting to download an isolate, you must ensure you have permission to access the isolate server. Full instructions can be found here. For most cases, you can simply run:
./src/tools/swarming_client/auth.py login --service=https://isolateserver.appspot.com
The above link requires that you log in with your @google.com credentials. It's not known at the present time whether this works with @chromium.org accounts. Email kbr@ if you try this and find it doesn't work.
Once you have followed the instructions on testing your own isolates for the GYP_DEFINES and authentication needed to upload isolates to the isolate server, you can also run your locally built tests by invoking run_isolated.py on the bot itself.
Note however that precautions need to be taken when logging on to the Swarming bots to not perturb their execution. If you are a Google employee then please consult the Chrome Internal GPU Pixel Wrangling Instructions for details on temporarily taking a machine out of the swarming pool for testing purposes.
(TODO(kbr): consult with maruel@ about the new recommended way to do this using swarming.py, launching and collecting the task's results.)
The goal of the GPU bots is to avoid regressions in Chrome's rendering stack. To that end, let's add as many tests as possible that will help catch regressions in the product. If you see a crazy bug in Chrome's rendering which would be easy to catch with a pixel test running in Chrome and hard to catch in any of the other test harnesses, please, invest the time to add a test!
There are a couple of different ways to add new tests to the bots:

1. Adding a new test to one of the existing harnesses.
2. Adding an entire new test step to the bots.
Adding new tests to the GTest-based harnesses is straightforward and essentially requires no explanation.
As of this writing it isn't as easy as desired to add a new test to one of the Telemetry based harnesses. See http://crbug.com/352807 . Let's collectively work to address that issue. It would be great to reduce the number of steps on the GPU bots, or at least to avoid significantly increasing the number of steps on the bots. The WebGL conformance tests should probably remain a separate step, but some of the smaller Telemetry based tests (context_lost_tests, memory_test, etc.) should probably be combined into a single step.
If you are adding a new test to one of the existing test suites (e.g., pixel_test), all you need to do is make sure that your new test runs correctly via isolates. See the documentation from the GPU bot details on adding new isolated tests for the GYP_DEFINES and authentication needed to upload isolates to the isolate server. Most likely the new test will be Telemetry based, and included in the telemetry_gpu_test_run isolate; it can then be invoked via isolate.py.
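A sketch of the invocation, assuming a Release build and the swarming client checkout path used elsewhere in this document (the location of the .isolated file and isolate.py's exact flags may differ in your checkout):

./src/tools/swarming_client/isolate.py run -s out/Release/telemetry_gpu_test_run.isolated -- <test name> <test arguments>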
The tests that are run by the GPU bots are described by a couple of JSON files in the Chromium workspace: chromium.gpu.json and chromium.gpu.fyi.json, in src/testing/buildbot/.
These files are autogenerated by the script src/content/test/gpu/generate_buildbot_json.py.
This script is completely self-contained and should hopefully be self-explanatory. The JSON files are parsed by the chromium and chromium_trybot recipes, and describe two types of tests:

1. GTest-based tests, which are simple executables.
2. Telemetry based tests, which are run via the full browser.
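As a trimmed, illustrative sketch of the file format (the real files contain many more fields and builders; consult them for the authoritative schema), an entry for one bot might look like:

{
  "Linux Release (NVIDIA)": {
    "gtest_tests": [
      {
        "test": "gl_unittests"
      }
    ]
  }
}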
Tryjobs which add new test steps to the chromium.gpu.json file will run those new steps during the tryjob, which helps ensure that the new test won't break once it starts running on the waterfall.
Tryjobs which modify chromium.gpu.fyi.json can be sent to the win_optional_gpu_tests_rel, mac_optional_gpu_tests_rel and linux_optional_gpu_tests_rel tryservers to help ensure that they won't break the FYI bots.
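For example, using the same git cl try syntax shown earlier, a change touching chromium.gpu.fyi.json could be exercised with:

git cl try -b win_optional_gpu_tests_rel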
Adding new pixel tests which require reference images is a slightly more complex process than adding other kinds of tests which can validate their own correctness. There are a few reasons for this.
When making a Chromium-side change which changes the pixel tests' results, the reference images for the affected tests must be updated as well.
It's critically important to aggressively investigate and eliminate the root cause of any flakiness seen on the GPU bots. The bots have been known to run reliably for days at a time, and any flaky failures that are tolerated on the bots translate directly into instability of the browser experienced by customers. Critical bugs in subsystems like WebGL, affecting high-profile products like Google Maps, have escaped notice in the past because the bots were unreliable. After much re-work, the GPU bots are now among the most reliable automated test machines in the Chromium project. Let's keep them that way.
Flakiness affecting the GPU tests can come in from highly unexpected sources. Here are some examples: