But these are not sufficient to scale team velocity at over 200 commits per day. Big design flaws remain in the way the team works. In particular, to scale the Chromium team's productivity, significant changes in the infrastructure need to happen: the latency of testing across platforms needs to be drastically reduced. That requires getting the test results in O(1) time, independent of:
To achieve this, sharding a test must have a constant cost. This is what the Swarming integration is about.
To recapitulate the Isolated design doc, the tooling described there is used to archive all the run-time dependencies of a unit test on the "builder" to the Isolate Server. Since the content store is content-addressed by the SHA-1 of the content, only new content is archived. Then only the SHA-1 of the manifest describing the whole dependency set is sent to the Swarming bots, along with the index of the shards each needs to run. That is, 40 bytes for the hash plus 2 integers are all that is required to know what OS is needed and what files are needed to run a shard of test cases.
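The content-addressed flow above can be sketched in a few lines. This is a minimal illustration, not the real Isolate Server API; the in-memory dict stands in for the datastore and all names are hypothetical:

```python
import hashlib

store = {}  # digest -> content; stands in for the Isolate Server datastore


def archive(content: bytes) -> str:
    """Store content under its SHA-1; re-archiving identical content is a no-op."""
    digest = hashlib.sha1(content).hexdigest()
    store.setdefault(digest, content)
    return digest


# Archive two dependencies, then a manifest that lists them by digest.
exe = archive(b"unit_test binary")
data = archive(b"test data file")
manifest = archive(("files: %s %s" % (exe, data)).encode())

# A task description is just the 40-char manifest hash plus two integers.
task = (manifest, 0, 10)  # (manifest SHA-1, shard index, total shards)
assert len(task[0]) == 40
```

Because the store is keyed by content hash, rebuilding an unchanged dependency uploads nothing new, which is what keeps the per-build archival cost incremental.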
How the infrastructure works
For each buildbot slave using Swarming:
- Checks out sources.
- Runs 'isolate tests'. This archives the builds on https://isolateserver.appspot.com.
- Triggers Swarming tasks.
- Runs anything that needs to run locally.
- Collects Swarming tasks results.
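The per-slave build flow above can be summarized as a sketch. This is a hedged illustration of the ordering only, not the actual recipe or buildbot API; the callables are hypothetical stand-ins for the real steps:

```python
def run_build(checkout, isolate, trigger, run_local, collect):
    """Order of operations on a buildbot slave using Swarming."""
    checkout()                             # sync sources
    digests = isolate()                    # archive outputs to the Isolate Server
    tasks = [trigger(d) for d in digests]  # fan out Swarming tasks early...
    run_local()                            # ...so local steps overlap remote runs
    return [collect(t) for t in tasks]     # block only at the very end
```

The important property is that triggering happens before the local steps, so remote shards execute while the slave is still busy; the only blocking point is the final collect.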
The Commit Queue uses Swarming indirectly via the Try Server.
So there are really two layers of control involved. The first is the Buildbot master, which controls the overall "build": syncing the sources, compiling, requesting the tests to be run on Swarming and asking it to report success or failure. The second layer is the Swarming server itself, which "micro-distributes" test shards. Each test shard is actually a subset of the test cases for a single unit test executable. All the unit tests are run concurrently. So for example, for a Try Job that requests several tests to be run, they are all run simultaneously on different Swarming bots, and slow tests, like browser_tests, are further sharded across multiple bots, all simultaneously.
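Splitting a single executable's test cases across bots can be done deterministically from just the shard index and shard count, in the style of gtest's GTEST_SHARD_INDEX / GTEST_TOTAL_SHARDS environment variables. A minimal sketch of that partitioning:

```python
def shard(test_cases, shard_index, total_shards):
    """Each shard runs every total_shards-th test, starting at its own index."""
    return [t for i, t in enumerate(test_cases)
            if i % total_shards == shard_index]


cases = ["Test%d" % i for i in range(10)]
shards = [shard(cases, i, 3) for i in range(3)]
# Together the shards cover every test case exactly once.
assert sorted(sum(shards, [])) == sorted(cases)
```

Because the split is a pure function of (index, total), each bot can compute its own subset independently, which is why a task description needs only the manifest hash plus two integers.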
How the Try Server is using Swarming:
What using Swarming directly looks like
- This project is an integral part of the Chromium Continuous Integration infrastructure and the Chromium Try Server.
- While this project will greatly improve the Chromium Commit Queue performance, it has no direct relationship to it; the performance improvement, while we're aiming for it, is purely a side effect of the reduced Try Server testing latency.
- Active project members: maruel@, tandrii@, vadimsh@.
- Code: github.com/luci/luci-py.
Everything is done.
The overhead becomes large at around ~6GiB of archived data per build. We're currently in the range of 9GiB to 12GiB generated per build.
This project is primarily aimed at reducing the overall latency from "asking for the green light signal for a CL" to getting the signal. The CL can be "not committed yet" or "just committed", the former being handled by the Try Server, the latter by the Continuous Integration servers. The latency is reduced by enabling a higher number of parallel shard executions and by removing the constant costs of syncing the sources and zipping the test executables, both of which are extremely slow, on the order of minutes.
Other latencies include:
- Time to archive the dependencies to the Isolate Server.
- Time to trigger a Swarming run.
- Time for the slaves to react to a Swarming run request.
- Time for the slaves to fetch the dependencies and map them into a temporary directory.
- Time for the slaves to clean up the temporary directory and report stdout/stderr back to the Swarming master.
- Time for the Swarming master to react and return the information to the Swarming client running on the buildbot slave.
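A toy latency model makes the O(1) claim concrete. The numbers below are illustrative, not measurements; the point is that with shards running simultaneously, only the largest shard and the fixed overheads above contribute to end-to-end time:

```python
import math


def end_to_end_seconds(num_cases, num_shards, per_case=0.5, overhead=30):
    """Wall-clock time when shards run in parallel: fixed overhead plus
    the duration of the largest shard (illustrative parameters)."""
    largest_shard = math.ceil(num_cases / num_shards)
    return overhead + largest_shard * per_case


# 8000 test cases: serial vs. sharded 40 ways.
assert end_to_end_seconds(8000, 1) == 4030.0   # ~67 minutes
assert end_to_end_seconds(8000, 40) == 130.0   # ~2 minutes
```

As num_shards grows, the variable term shrinks toward zero and the fixed overheads dominate, which is why driving those overheads down matters as much as adding shards.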
All servers run on AppEngine, which scales just fine.
Redundancy and Reliability
There are multiple single points of failure:
- The Isolate Server which is hosted on AppEngine.
- The Swarming master, which is also hosted on AppEngine.
- The buildbot masters, which are single-threaded processes written in Python.
There is currently no redundancy for the buildbot infrastructure; if a VM dies, it is simply replaced right away by a sysadmin. The Swarming bots are intrinsically redundant. The Isolate Server data store isn't redundant or reliable, but its content can be rebuilt from sources if needed. If it fails, it will block the infrastructure.
Since the whole infrastructure is visible from the internet, like this design doc, proper DACLs need to be used. Both the Swarming master and the Isolate Server require valid Google accounts. Credential verification is completely managed by auth_service.
All the code (Swarming master, Isolate Server and swarming_client) is tested on the canary before being rolled out to prod. See the Canary Setup above.
Why not a faulting file system like FUSE?
Faulting file systems are inherently slow: every time a file is missing, the whole process blocks while the FUSE adapter downloads the file synchronously, then the process resumes. Multiply that by the ~8000 files browser_tests lists. With a pre-loaded content-addressed file system, all the files can be safely cached locally and downloaded simultaneously. The savings and speed improvement are enormous.
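The contrast can be sketched as follows. Here fetch() is a local stand-in for a download (no real network or FUSE involved); the difference is purely in the fetch strategy:

```python
from concurrent.futures import ThreadPoolExecutor


def fetch(digest):
    """Stand-in for downloading one file from the content store."""
    return b"content of " + digest.encode()


def fault_driven(digests):
    # FUSE-style: each missing file blocks the process, one fetch at a time.
    return [fetch(d) for d in digests]


def preloaded(digests):
    # Content-addressed: the full file list is known from the manifest up
    # front, so everything can be fetched concurrently before the test starts.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch, digests))


digests = ["%040x" % i for i in range(100)]
assert fault_driven(digests) == preloaded(digests)
```

Both strategies yield identical content; the pre-loaded approach simply overlaps the downloads, so latency scales with the slowest fetch rather than the sum of thousands of sequential ones.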