Summary/Quick Start

If you have a performance autotest (either a wrapper around a Telemetry test for measuring Chrome performance, or a more Chromium OS-specific performance autotest) and you want any of its measured perf values displayed as graphs on the performance dashboard, you should complete the following high-level steps. More details for each of the steps below are provided later in this document.

1. Output your perf data in the test by invoking self.output_perf_value() for each measured metric.
2. Specify presentation settings by adding an entry for your test to perf_dashboard_config.json.
3. Verify your test and changes.
4. Let the test run a few times through the autotest infrastructure.
5. Optionally, notify chrome-perf-dashboard-team@ for regression analysis and alerts.
6. See your perf graphs on the dashboard.
The Performance Dashboard

The Chromium OS team will be using the same performance dashboard that the Chrome team uses, which is available here: https://chromeperf.appspot.com/
The source code for the dashboard is Google internal-only. Performance test data is organized around several main concepts on the dashboard: masters, bots, tests, and graphs (the metrics plotted for each test), all of which appear in the navigation described below.
The dashboard itself is externally visible, but it supports internal-only data that can only be viewed when logged into the dashboard with an @google.com account. Data is either internal or external on a per-bot basis. By default, data associated with a bot is considered internal-only until it is explicitly allowed to be visible externally (i.e., without having to log in to the dashboard).

To see performance graphs from the front page of the dashboard, click on the “Performance Graphs” link in the top menu, or navigate directly to https://chromeperf.appspot.com/report. There, you will see a few dropdowns that let you specify what graph data you want to see. First, select the test name from the left-most dropdown. Next, select the bot name from the next dropdown (bots are sorted alphabetically by name under the master to which they belong). Finally, select the metric name (or graph name) from the right-most dropdown to see the selected performance graph.
If you don't see your data, you may need to log into the dashboard with your @google.com account (see the “Sign in” link on the top-right of the page). This will allow you to also see the internal-only test data. If you still don't see your data, check to make sure your data is getting uploaded to the dashboard in the first place.
Instructions for Getting Data onto the Perf Dashboard

Output your perf data in the test

In order to get your test's measured perf data piped through to the perf dashboard, you must have your test invoke self.output_perf_value() for every perf metric measured by your test that you want displayed on the perf dashboard. The output_perf_value function is currently defined here. Here is the function definition:

    def output_perf_value(self, description, value, units=None,
                          higher_is_better=True, graph=None):
        """Records a measured performance value in an output file.

        The output file will subsequently be parsed by the TKO parser to have
        the information inserted into the results database.

        @param description: A string describing the measured perf value. Must
                be maximum length 256, and may only contain letters, numbers,
                periods, dashes, and underscores. For example:
                "page_load_time", "scrolling-frame-rate".
        @param value: A number representing the measured perf value, or a list
                of measured values if a test takes multiple measurements.
                Measured perf values can be either ints or floats.
        @param units: A string describing the units associated with the
                measured perf value. Must be maximum length 32, and may only
                contain letters, numbers, periods, dashes, and underscores.
                For example: "msec", "fps", "score", "runs_per_second".
        @param higher_is_better: A boolean indicating whether or not a
                "higher" measured perf value is considered to be better. If
                False, it is assumed that a "lower" measured value is
                considered to be better.
        @param graph: A string indicating the name of the graph on which the
                perf value will be subsequently displayed on the chrome perf
                dashboard. This allows multiple metrics to be grouped together
                on the same graphs. Defaults to None, indicating that the perf
                value should be displayed individually on a separate graph.
        """

Example: Suppose you have a perf test that loads a webpage and measures the frames-per-second of an animation on the page 5 times over the course of a minute or so. We want to measure/output the time (in msec) to load the page, as well as the 5 frames-per-second measurements we've taken from the animation. Suppose we've measured these values and have them stored in the following variables:

    page_load = 173
    fps_vals = [34.2, 33.1, 38.6, 35.4, 34.7]

To output these values for display on the perf dashboard, we need to invoke self.output_perf_value() twice from the test, once for each measured metric:

    self.output_perf_value("page_load_time", page_load, "msec",
                           higher_is_better=False)
    self.output_perf_value("animation_quality", fps_vals, "fps")

For the “animation_quality” metric, the perf dashboard will show the average and standard deviation (error) from among all 5 measured values. Note that the “higher_is_better” parameter doesn't need to be specified for “animation_quality”, because it defaults to True and a higher FPS value is indeed considered to be better. Autotest platform_GesturesRegressionTest is an example of a real test that invokes self.output_perf_value().

It's assumed that every perf metric output by a test will have a unique “description”, and you need to invoke self.output_perf_value() once for each perf metric (unique description) measured by your test. In some cases, a test may take multiple measurements for a given perf metric. When doing so, the test should output all of these measurements in a list for the “value” parameter.
The autotest infrastructure will automatically take care of computing the average and standard deviation of these values, and both will be uploaded to the perf dashboard (standard deviation values are used by the perf dashboard to represent “errors in measurement”, and these errors are depicted in the graphs along with the average values themselves).

Group multiple performance metrics into one graph

You can group multiple metrics into one graph by passing a graph name via the "graph" argument. The value of "graph" defaults to None, indicating that each performance metric should be displayed on its own separate graph.
As an example, suppose we have an autotest called “myTest” that outputs 4 perf metrics, with descriptions “metric1”, “metric2”, “metric3”, and “metric4”. We want "metric1" and "metric2" to be displayed on a graph called "metricGroupA", and "metric3" and "metric4" to be displayed on a graph called "metricGroupB". We need to call self.output_perf_value() in the following way.
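Here is a minimal sketch of those calls; the measured values, units, and higher_is_better settings below are purely illustrative, and it is the "graph" argument that does the grouping:

    # Hypothetical measurements from "myTest"; values and units are illustrative.
    self.output_perf_value("metric1", 10.2, units="msec",
                           higher_is_better=False, graph="metricGroupA")
    self.output_perf_value("metric2", 12.8, units="msec",
                           higher_is_better=False, graph="metricGroupA")
    self.output_perf_value("metric3", 55.1, units="fps", graph="metricGroupB")
    self.output_perf_value("metric4", 61.3, units="fps", graph="metricGroupB")

Metrics that share the same "graph" name are plotted together on that graph on the dashboard, while metrics left with graph=None each get their own graph.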
Specify presentation settings

Required

The perf dashboard requires perf data to be sent with a "master name". The term "master" is used because this originally referred to the Chrome Buildbot master name, but for Chrome OS tests that aren't run by Chrome Buildbot, "master" should describe the general category of test, e.g. ChromeOSPerf or ChromeOSWifi. We generally recommend just using "ChromeOSPerf" for Chrome OS tests.

Once you have determined the appropriate “master” name, add a new entry to the file perf_dashboard_config.json to specify it. This JSON config file specifies the master name and any overridden presentation settings. It is formatted as a list of dictionaries, where each dictionary contains the config values for a particular perf test (there should be at most one entry in the config file for any given perf test). The dictionary for a test in the JSON file must contain, at a minimum, an “autotest_name” and a “master_name”:
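A minimal sketch of such an entry, using the hypothetical "myTest" autotest from the earlier example:

    [
      {
        "autotest_name": "myTest",
        "master_name": "ChromeOSPerf"
      }
    ]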
Optional

By default, without overriding any of the default presentation settings, the name of the test on the dashboard will be the same as the autotest name. If you need to change the name of the test as it appears on the dashboard, add a "dashboard_test_name" entry to your test's dictionary in perf_dashboard_config.json.
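For example, a hypothetical entry that renames "myTest" on the dashboard might look like this (the dashboard name is a placeholder):

    [
      {
        "autotest_name": "myTest",
        "master_name": "ChromeOSPerf",
        "dashboard_test_name": "my_dashboard_test"
      }
    ]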
Verify your test and changes

Important: Whenever you make any changes to perf_dashboard_config.json, make sure to also run the unit tests in that directory before checking in your changes, because those tests run some sanity checks over the JSON file (e.g., to make sure it's still parseable as proper JSON). You can invoke the unit test file directly: > python perf_uploader_unittest.py

You can also check your autotest result folder for a file called "results-chart.json". This is an intermediate JSON file containing the data that will later be uploaded to the dashboard; manually check whether it looks correct.

Let the test run a few times

Once you've modified your test to invoke self.output_perf_value() where necessary, specified a sheriff rotation name (“master” name) for your test, and overridden the default presentation settings on the perf dashboard if you chose to do so, you're ready to start getting that perf data sent to the perf dashboard. Check in your code changes and ensure your test starts running in a suite through the autotest infrastructure. Perf data should get uploaded to the perf dashboard as soon as each run of your autotest completes.

Notify chrome-perf-dashboard-team@ for regression analysis and alerts (not required)

If you want to have the perf dashboard analyze your data for regressions, you should request to have your data monitored, and you can provide an email address to receive alerts. To make the request, file a "Monitoring Request" bug with the label "Performance-Dashboard", including information about your test and the email address that should receive alerts, or send an email to chrome-perf-dashboard-team@google.com.
See your perf graphs on the dashboard

Once you're sure your test has run at least a few times in the lab, navigate in your browser to the perf dashboard and look for your perf graphs. Refer to the earlier section in this document called “The Performance Dashboard” for an overview of the dashboard itself and how to find your data there.