
Performance Try Bots


There are Python scripts that automate building and testing a commit against tip of tree and comparing performance with and without your patch. The performance try bots are built on top of the bisect bot architecture. A bot works by syncing to the specified revision, applying your patch, building Chrome, and running the performance test. It then reverts your patch, builds again, and runs the performance test a second time. Results are output on the waterfall, as well as uploaded to cloud storage.

For information about using the performance try bots to perform a bisect, see Bisecting Performance Regressions.

The performance try server is tryserver.chromium.perf.

Supported Platforms

 Platform          Builder Name
 Linux             linux_perf_bisect
 Windows           win_perf_bisect
 Mac 10.8          mac_perf_bisect
 Mac 10.9          mac_10_9_perf_bisect
 Android GN        android_gn_perf_bisect
 Android Nexus 4   android_nexus4_perf_bisect
 Android Nexus 10
 ChromeOS          No plans at the moment

Starting a perf try job
  1. Create a new git branch or check out an existing branch.
  2. Edit tools/run-perf-test.cfg (instructions in file) or, for blink patches, third_party/WebKit/Tools/run-perf-test.cfg.
    1. Take care to strip any src/ directories from the head of relative path names.
    2. On desktop, only --browser=release is supported; on Android, use --browser=android-chromium-testshell.
    3. You can search the stdio from performance test runs for "run_benchmark" to help figure out the command to use.
  3. Upload your patch: git cl upload --bypass-hooks. The --bypass-hooks flag is necessary to upload the changes you committed locally to run-perf-test.cfg; changes to run-perf-test.cfg should never be committed to the project repository.
  4. Send your try job to the try server: git cl try -m tryserver.chromium.perf -b <bot>
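As a rough sketch, the edited config might look like the following. The key names and values here are illustrative, not authoritative; follow the instructions in your checkout's copy of run-perf-test.cfg, since the expected fields may differ by revision.

```python
# Hypothetical run-perf-test.cfg contents -- key names and the benchmark
# name are examples; verify against the template comments in your checkout.
config = {
    # Full command to run the benchmark, with any leading src/ stripped
    # from relative paths and --browser=release on desktop.
    "command": "./tools/perf/run_benchmark -v --browser=release smoothness.top_25",
    # How many times to repeat the performance test.
    "repeat_count": "20",
    # Give up if the job takes longer than this many minutes.
    "max_time_minutes": "20",
}
```

Remember that this edit should travel with the try job only (uploaded via --bypass-hooks) and never be committed to the project repository.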

Submitting jobs with blink patches
You can also submit jobs with blink patches. Since the bot uses the config file to pass the parameters of the job, you'll need to modify the run-perf-test.cfg file in blink instead; it is located at third_party/WebKit/Tools/run-perf-test.cfg.

Tips about test run time

  • Cycle times vary widely between bots, and also between tests.
  • If you can reproduce the regression on multiple platforms, keep in mind that Linux runs are usually the fastest, followed by Windows, then Mac, and finally Android.
  • You can expect to wait about 20-30 minutes for a Linux test; Mac and Windows may take 2-3x longer than Linux, and Android can take 3-4x longer.