Telemetry: Unit tests

Telemetry has three suites of unit tests:

  1. telemetry_unittests in catapult: catapult/telemetry/bin/run_tests # Verifies that the core framework functions against the stable browser
  2. telemetry_unittests in the chromium repo: src/tools/perf/run_telemetry_tests # Verifies that the core framework functions against your browser built in the chromium environment
  3. telemetry_perf_unittests: src/tools/perf/run_tests # Verifies that the benchmarks built on the framework function properly; this runs all the tests in the tools/perf/ folder

These are functional tests that do not depend on performance.

Triaging failures

  • Is there a native stack? Since these tests interact with a lot of recorded real world content, they unintentionally end up serving as integration tests which frequently uncover Chromium crashes. If you see a native crash stack (after a TabCrashException or BrowserGoneException), this is guaranteed to be a browser issue. Usually scanning the change log for patches that touch files that show up in the stack will point to the culprit to revert.
  • Is there a Python stack? If there's a Python-only exception, it is very likely, but not guaranteed to be a Telemetry breakage. Look for Telemetry changes in the range for a culprit.
  • Is there a timeout? These could go either way and are tricky to diagnose; move on to local diagnostics.
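The triage heuristics above can be sketched as a small classifier. This is illustrative only: the exception names (TabCrashException, BrowserGoneException) come from the text, while the other log markers are assumptions for this sketch.

```python
# Illustrative triage helper following the rules above. Log markers other
# than the two exception names are assumptions, not real Telemetry output.
def triage(log):
    # Native crash stack: guaranteed to be a browser issue.
    if "TabCrashException" in log or "BrowserGoneException" in log:
        return "browser issue: scan the change log for patches touching files in the stack"
    # Python-only exception: very likely a Telemetry breakage.
    if "Traceback (most recent call last):" in log:
        return "likely Telemetry breakage: look for Telemetry changes in the range"
    # Timeouts could go either way.
    if "Timed out" in log:
        return "inconclusive: diagnose locally"
    return "unknown: inspect manually"

print(triage("... raised exceptions.TabCrashException ..."))
```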

Running the tests locally

  • Build a version of Chrome in which the test fails. Any browser target works, e.g. chrome or chrome_shell_apk; Telemetry itself requires no build targets.
  • Authenticate into Cloud Storage.
  • Run the test via:

    $ catapult/telemetry/run_tests <test> --browser=<browser> --chrome-root=<path to chromium src/ dir>

    where:

      • <test> can be e.g. BrowserTest, which acts as a “wildcard” by matching any test whose name contains the substring "BrowserTest", or list (for a full list of tests)
      • <browser> can be e.g. release
      • <path to chromium src/ dir> is the full path, including the src/ at the end

  • Follow the steps described in the Diagnosing Test Failures page; the command-line flags listed there are accepted by run_tests and may prove useful.

Using GDB would likely require modifying the test scripts.

Disabling tests

Tests should generally only be disabled for flakiness. Consistent failures should be diagnosed and the culprit reverted.

The @decorators.Disabled and @decorators.Enabled decorators may be added above any test to enable or disable it. They optionally accept a list of platforms, OS versions, or browser types. Examples:

from telemetry import decorators

  • @decorators.Disabled # Disabled everywhere
  • @decorators.Enabled('mac') # Only runs on Mac
  • @decorators.Disabled('xp') # Runs everywhere except Windows XP
  • @decorators.Disabled('debug') # Runs everywhere except debug builds
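For illustration, here is a minimal stand-in showing how such conditional decorators can work. This is a sketch, not the real telemetry.decorators implementation; the attribute names and the test name are assumptions.

```python
# Minimal sketch of platform-conditional test decorators, mimicking the
# telemetry.decorators pattern. Attribute names here are illustrative only.
def Disabled(*conditions):
    """Tag a test as disabled, optionally only under the given conditions."""
    def wrapper(test):
        test._disabled_on = list(conditions) or ["all"]
        return test
    return wrapper

def Enabled(*conditions):
    """Tag a test as enabled only under the given conditions."""
    def wrapper(test):
        test._enabled_on = list(conditions)
        return test
    return wrapper

@Disabled("xp", "debug")  # Runs everywhere except Windows XP and debug builds
def testScrollingPage():  # Hypothetical test name
    pass

print(testScrollingPage._disabled_on)  # ['xp', 'debug']
```

A test harness would then read these attributes at collection time and skip tests whose conditions match the current platform and build type.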