We record page sets so that we control network conditions and are insulated from changes to the live sites, which makes benchmarks more stable.
Write a page set
Before you can record a page set, you need to write it! If you want to record a pre-existing page set, you can skip this step.
Page sets are located in `tools/perf/page_sets`. A simple page set with one URL begins with these imports:

```python
from telemetry.page import page
from telemetry.page import page_set
```
Telemetry spoofs Chrome's User-Agent field, and `user_agent_type` tells it whether to use a desktop, mobile, or tablet user agent. We generally use a single recording for all platforms.
`archive_data_file` contains metadata about which pages are stored in which archive files. You need to specify its location in the page set; the file itself is generated when you record the page set.
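Putting these pieces together, a complete minimal page set might look like the sketch below. The class names, URL, and archive path are all illustrative, and the exact base-class API (for example, `AddUserStory` versus the older `AddPage`) has varied between Telemetry revisions, so check a nearby page set in `tools/perf/page_sets` for the current form:

```python
from telemetry.page import page
from telemetry.page import page_set


class ExamplePage(page.Page):

  def __init__(self, page_set):
    super(ExamplePage, self).__init__(
        url='http://www.example.com/',  # illustrative URL
        page_set=page_set)


class ExamplePageSet(page_set.PageSet):

  def __init__(self):
    super(ExamplePageSet, self).__init__(
        user_agent_type='desktop',              # desktop, mobile, or tablet
        archive_data_file='data/example.json')  # illustrative metadata path
    self.AddUserStory(ExamplePage(self))
```

This file is not run directly; the Telemetry harness imports it, so it only needs to work inside a Chromium checkout.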
Record a page set
Use the `record_wpr` script to record a page set. Your command will look something like this:

```
src$ tools/perf/record_wpr --browser=(release|system) page_set.py
```
For example, to record the top_25.py page set:

```
src$ tools/perf/record_wpr --browser=system tools/perf/page_sets/top_25.py
```
To update the recording for only some pages in the page set, use `--page-filter`. This command will record only Wikipedia pages:

```
src$ tools/perf/record_wpr --browser=system --page-filter=wikipedia tools/perf/page_sets/top_25.py
```
`record_wpr` generates a few files:

- A `.wpr` file containing the recorded data. This file is hidden from `git status`, for reasons we explain below.
- A `.wpr.sha1` file containing the SHA1 hash of the `.wpr` file.
- A `.json` file containing metadata about which `.wpr` files store which URLs.
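For reference, the `.wpr.sha1` file simply holds the hex SHA1 digest of the archive's bytes. A small sketch of how such a digest is computed (illustrative, not code from the Chromium tools):

```python
import hashlib


def sha1_of_file(path):
    """Return the hex SHA1 digest of a file's contents."""
    digest = hashlib.sha1()
    with open(path, 'rb') as f:
        # Read in chunks so large .wpr archives need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b''):
            digest.update(chunk)
    return digest.hexdigest()
```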
Upload the recording to Cloud Storage
To avoid bloating everyone's Chromium checkout, we don't commit the large `.wpr` files to source control. Instead, we upload them to Cloud Storage and download them as needed. If you just want to use your recording locally, you can skip this step.
To do this, check in only the `.json` files. When you run `git cl upload`, a `PRESUBMIT` script will upload the `.wpr` file to Cloud Storage.
What is Web Page Replay?
Web Page Replay is a tool that captures HTTP requests and responses while a page set is being recorded, and serves the stored responses back during benchmark runs so that tests don't depend on the live network.
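Conceptually, Web Page Replay sits between the browser and the network as a proxy: in record mode it forwards each request and stores the response, and in replay mode it serves the stored response without touching the network. A toy model of that behavior (a sketch for intuition, not the actual WPR implementation):

```python
class ReplayCache(object):
    """Toy model of Web Page Replay's record/replay behavior."""

    def __init__(self):
        self._responses = {}
        self.recording = True

    def fetch(self, url, live_fetch):
        """Return the response for url.

        In record mode, hit the live network via live_fetch and store the
        response. In replay mode, serve the stored response; a missing entry
        means the page set changed and needs re-recording.
        """
        if self.recording:
            self._responses[url] = live_fetch(url)
        return self._responses[url]
```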