GPU Bots & Pixel Wrangling

GPU Pixel Wrangling

GPU Pixel Wrangling is the process of keeping various GPU bots green. On the GPU bots, tests run on physical hardware with real GPUs, not in VMs like the majority of the bots on the Chromium waterfall.

GPU Bots

Waterfalls

The waterfalls work much like any other; see the Tour of the Chromium Buildbot Waterfall for a more detailed explanation of how they are laid out. The configurations here are more fine-grained because the GPU matters, not just the OS and Release vs. Debug. Hence we have Windows NVIDIA Release bots, Mac Intel Debug bots, and so on.

The waterfalls we’re interested in are:

Test Suites

The bots run several test suites. The majority have been migrated to the Telemetry harness and run within the full browser, in order to better test the code that actually ships. As of this writing, the test suites include:
  • Tests using the Telemetry harness:
    • The WebGL conformance tests: src/content/test/gpu/gpu_tests/webgl_conformance.py
    • A Google Maps test: src/content/test/gpu/gpu_tests/maps.py
    • Context loss tests: src/content/test/gpu/gpu_tests/context_lost.py
    • GPU process launch tests: src/content/test/gpu/gpu_tests/gpu_process.py
    • Hardware acceleration validation tests: src/content/test/gpu/gpu_tests/hardware_accelerated_feature.py
    • GPU memory consumption tests: src/content/test/gpu/gpu_tests/memory.py
    • Pixel tests validating the end-to-end rendering pipeline: src/content/test/gpu/gpu_tests/pixel.py
  • content_gl_tests: see src/content/content_tests.gypi
  • gles2_conform_test (requires internal sources): see src/gpu/gles2_conform_support/gles2_conform_test.gyp
  • gl_tests: see src/gpu/gpu.gyp
  • angle_unittests: see src/gpu/gpu.gyp
Additionally, the Release bots run:
  • tab_capture_performance_tests: see performance_browser_tests in src/chrome/chrome_tests.gypi and src/chrome/browser/extensions/api/tab_capture/tab_capture_performancetest.cc

Wrangling

Prerequisites

  1. Ideally a wrangler should be both a WebKit and a Chromium committer. There's a wrangling schedule; if you're in the rotation, you'll receive an email notifying you of your upcoming stint.
  2. Apply for access to the bots.

How to Keep the Bots Green

  1. Watch for redness on the tree.
    1. The bots are expected to be green all the time. Flakiness on these bots is neither expected nor acceptable.
    2. If a bot goes consistently red, it's necessary to figure out whether a recent CL caused it, or whether it's a problem with the bot or infrastructure.
    3. If it looks like a problem with the bot (deep problems like failing to check out the sources, the isolate server failing, etc.) notify the Chromium troopers. See the general tree sheriffing page for more details.
    4. Otherwise, examine the builds just before and after the redness was introduced and note the revisions in each. Depending on whether you're looking at the Chromium or Blink tree, use the Chromium or Blink revisions. Unfortunately, you'll need to construct your regression URL manually:
      1. For regressions on the Chromium tree: use this URL and replace "[rev1]" and "[rev2]" in the "range=[rev1]:[rev2]" URL query parameter
      2. For regressions on the Blink tree: use this URL and replace "[rev1]" and "[rev2]" in the "range=[rev1]:[rev2]" URL query parameter
    5. File a bug capturing the regression range and excerpts of any associated logs. Regressions should be marked P1. CC engineers who you think may be able to help triage the issue. Keep in mind that the logs on the bots expire after a few days, so make sure to add copies of relevant logs to the bug report.
    6. Study the regression range carefully. Changes outside the Chromium tree (e.g., in /trunk/tools/ rather than /trunk/src/) may break the GPU bots, because the GPU recipe that drives these bots lives in the tools repository.
    7. Use drover to revert any CLs which break the GPU bots. In the revert message, provide a clear description of what broke, links to failing builds, and excerpts of the failure logs, because the build logs expire after a few days.
  2. Make sure the bots are running jobs.
    1. Keep an eye on the console views of the various bots.
    2. Make sure the bots are all actively processing jobs. If they go offline for a long period of time, the "summary bubble" at the top may still be green, but the column in the console view will be gray.
    3. Email the Chromium troopers if you find a bot that's not processing jobs.
  3. Make sure the GPU try servers are in good health.
    1. Examine the waterfall at http://build.chromium.org/p/tryserver.chromium.gpu/waterfall . Another useful view is http://build.chromium.org/p/tryserver.chromium.gpu/builders .
    2. Drill down into individual builder/tester pairs like win_gpu and win_gpu_triggered_tests.
    3. See if there are any pervasive build or test failures. Note that test failures are expected on these bots: individuals' patches may fail to apply, fail to compile, or break various tests. Look specifically for patterns in the failures. It isn't necessary to spend a lot of time investigating each individual failure. (Use the "Show: 200" link at the bottom of the page to see more history.)
    4. If the same set of tests are failing repeatedly, look at the individual runs. See whether they're all running on the same machine. If they are, something might be wrong with the hardware. Links to individual buildslaves like gpulin8 are on pages like linux_gpu_triggered_tests. Note that the individual buildslaves' pages do not have many builds available for history. Issue 363730 was a recent example of a hardware failure diagnosed in this manner.
    5. If you see the same test failing in a flaky manner across multiple machines and multiple CLs, it's crucial to investigate why it's happening. Issue 395914 was a recent example of an innocent-looking Blink change which made it through the commit queue and introduced widespread flakiness in a range of GPU tests. The failures were also most visible on the try servers as opposed to the main waterfalls.
    6. Watch Chrome Monitor's GPU tryserver pages (for example, the page for win_gpu_triggered_tests) to see whether any of the tryservers are falling far behind (hundreds of jobs queued up). If so, email the Chromium troopers for help.
  4. Check if any pixel test failures are actual failures or need to be rebaselined.
    1. For a given build failing the pixel tests, click the "stdio" link of the "pixel" step.
    2. The output will contain a link of the form http://chromium-browser-gpu-tests.commondatastorage.googleapis.com/view_test_results.html?242523_Linux_Release_Intel__telemetry
    3. Visit the link to see whether the generated or reference images look incorrect.
    4. All of the reference images for all of the bots are stored in cloud storage under the link https://cloud.google.com/console#/storage/chromium-gpu-archive/reference-images/ . They are indexed by version number, OS, GPU vendor, GPU device, and whether or not antialiasing is enabled in that configuration. You can download the reference images individually to examine them in detail.
  5. Rebaseline pixel test reference images if necessary.
    1. Increment the revision number of the particular test in src/content/test/gpu/page_sets/pixel_tests.json .
    2. When this is committed, all of the bots will generate new reference images for the new version of the test.
    3. Alternatively, if absolutely necessary, you can use the Chrome Internal GPU Pixel Wrangling Instructions to delete just the broken reference images for a particular configuration.
  6. Update WebGL Conformance Test expectations if necessary: src/content/test/gpu/gpu_tests/webgl_conformance_expectations.py.
    1. See the header of the file for the list of modifiers used to specify a bot configuration. You can specify the OS (down to a specific version, say, Windows 7 or Mountain Lion), the GPU vendor (NVIDIA/AMD/Intel), and a specific GPU device (see the illustrative entry after this list).
    2. The key is to maintain the highest coverage: if you have to disable a test, disable it only on the specific configurations on which it's failing. Note that it is not possible to distinguish between Debug and Release configurations.
  7. For all other tests, use the regular DISABLE_ / FLAKY_ / etc. mechanisms.
  8. (Rarely) Update the version of the WebGL conformance tests. See below.
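For step 6 above, an expectation entry in webgl_conformance_expectations.py looks roughly like the sketch below. This is illustrative only: the test paths, configuration modifiers, and bug numbers are placeholders rather than real suppressions, and the exact helper names should be double-checked against the header of the file.

    # Inside the expectations class (illustrative; the test paths, modifiers,
    # and bug numbers below are placeholders).
    # Mark a test as failing only on Windows 7 machines with NVIDIA GPUs:
    self.Fail('conformance/glsl/misc/example-shader-test.html',
              ['win7', 'nvidia'], bug=123456)
    # Skip a test entirely (e.g., one that crashes or hangs), only on
    # Mountain Lion machines with Intel GPUs:
    self.Skip('conformance/textures/example-texture-test.html',
              ['mountainlion', 'intel'], bug=123457)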

When Bots Misbehave (SSHing into a bot)

  1. See the Chrome Internal GPU Pixel Wrangling Instructions for information on SSHing into the GPU bots.

Updating the WebGL Conformance Tests

Occasionally a bug in the WebGL conformance tests will be exposed by a WebKit roll, and the best solution is to roll forward to a new version of the WebGL conformance suite in which the bug has been fixed. In order to do this, follow the steps below.

  1. Visit https://chromium.googlesource.com/external/khronosgroup/webgl.git with your browser.
  2. Find the full git hash of the revision you want to roll to.
  3. Modify the entry for src/third_party/webgl/src in src/DEPS with the new hash (see the sketch after this list).
  4. Send the CL to the GPU try servers (win_gpu, linux_gpu, mac_gpu).
  5. If the CL looks good, commit it.
  6. Watch the GPU bots on the various waterfalls. There are more OS and GPU combinations on the waterfalls than the GPU try servers can reasonably cover, so an update to the WebGL conformance suite is likely to fail on one or more bots.
  7. Update the WebGL conformance suite expectations to suppress failures if necessary. File bugs about the need for these suppressions so they can be removed in the future.
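For step 3 above, the entry being edited in src/DEPS looks roughly like the excerpt below. This is a sketch: the surrounding syntax may differ slightly in your checkout, and the hash shown is just a placeholder for the full git hash found in step 2.

    # Excerpt from the deps dictionary in src/DEPS (illustrative; only the
    # hash after '@' changes when rolling the WebGL conformance suite):
    deps = {
      'src/third_party/webgl/src':
        Var('chromium_git') + '/external/khronosgroup/webgl.git' + '@' +
        '0123456789abcdef0123456789abcdef01234567',  # placeholder hash
    }
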
Extending the GPU Pixel Wrangling Rotation

See the Chrome Internal GPU Pixel Wrangling Instructions for information on extending the rotation.
