John Penix, Google
The primary Google code base receives more than 20 code changes per minute, with 50% of the source files changing every month. Most products are developed and released from ‘head’ on weekly or even daily schedules. To keep things working while we iterate quickly, we rely heavily on automated testing - running millions of tests each day.
To make this possible, we have built a large-scale continuous testing service on our cloud computing infrastructure. The scale - the change rate, the number of tests and the size of the code base - restricts the types of test selection and prioritization schemes that can be applied. In addition, the system is used both to quickly and accurately identify changes that cause test failures and to identify potential release candidates where all tests pass - goals that can compete in terms of test prioritization. In this talk, I’ll describe the basic architecture of the test automation system and the supporting build system infrastructure. I’ll then describe the coarse-grained dependency analysis used to select tests and how build system optimizations can compensate for imprecise dependency analysis. I’ll also discuss how usage of the system has changed and how we are currently evolving the test prioritization scheme.
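The core idea behind dependency-based test selection can be illustrated with a minimal sketch: run a test only if some file it transitively depends on has changed. The target names and dependency graph below are invented for illustration and do not reflect Google's actual build system.

```python
# Hypothetical sketch of coarse-grained, dependency-based test selection:
# a test is selected when its transitive dependency closure intersects
# the set of changed files. All targets/files here are made up.

from collections import deque

# Direct dependencies of each build target (illustrative only).
DEPS = {
    "//app:app_test": ["//app:app"],
    "//app:app": ["app/main.py", "//lib:util"],
    "//lib:util_test": ["//lib:util"],
    "//lib:util": ["lib/util.py"],
    "//web:web_test": ["//web:server"],
    "//web:server": ["web/server.py"],
}

def transitive_deps(target):
    """Return every target/file reachable from `target` via DEPS."""
    seen, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for dep in DEPS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

def affected_tests(changed_files, tests):
    """Select tests whose transitive dependencies touch a changed file."""
    changed = set(changed_files)
    return [t for t in tests if transitive_deps(t) & changed]

tests = ["//app:app_test", "//lib:util_test", "//web:web_test"]
print(affected_tests(["lib/util.py"], tests))
# → ['//app:app_test', '//lib:util_test']
```

A change to lib/util.py selects both the library's own test and the app test that depends on it indirectly, while the unrelated web test is skipped. The coarseness comes from tracking dependencies at the file/target level rather than, say, individual functions, which trades precision for the ability to keep up with a very high change rate.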
John Penix is a Senior Software Engineer on Google's Developer Infrastructure team and is currently the technical lead for the Test Automation Platform. He was previously the technical lead for an enterprise-wide deployment of static analysis tools, which included integrating results into the developer workflow. Prior to joining Google, John was a Computer Scientist at NASA's Ames Research Center, splitting his time between project management and applying model checking tools to software. John received a Ph.D. in Computer Engineering from the University of Cincinnati. He is a member of the Steering Committee for the IEEE/ACM International Conference on Automated Software Engineering.