In this codelab, you will build a client-side Autotest to check the disk and cache throughput of a ChromiumOS device. You will learn how to:
1. Set up the environment needed for Autotest
2. Run and edit a test
3. Write a new test and control file
4. Check the results of the test

In the process of doing so, you will also learn a little about the Autotest framework.
Autotest is an open-source project designed for testing the Linux kernel. Before starting this codelab, you might benefit from scrolling through some upstream documentation on Autotest client tests. Autotest manages the state of multiple client devices as a distributed system by integrating a web interface, a database, servers, and the clients themselves. Since this codelab is about client tests, what follows is a short description of how Autotest runs a specific test on one client.
Autotest looks through all directories in client/tests and client/site_tests for simple Python files whose names begin with 'control.'. These files contain a list of variables and a call to job.run_test. The control variables tell Autotest when to schedule the test, and the call to run_test tells Autotest how. Each test instance is part of a job; Autotest creates this job object and forks a child process to exec its control file.
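Conceptually, the discovery step amounts to the sketch below (illustrative only, not Autotest's actual code; find_control_files is an invented name):

```python
import glob
import os

def find_control_files(root_dirs=('client/tests', 'client/site_tests')):
    """Return the control files found one level below each test root."""
    controls = []
    for root in root_dirs:
        # Each test lives in its own directory containing a file named
        # 'control' (or 'control.<variant>').
        controls.extend(glob.glob(os.path.join(root, '*', 'control*')))
    return sorted(controls)
```

Each control file found this way is then handed to a freshly forked child process for execution.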
Note: the exec mentioned above is the Python keyword, not os.exec.
Tests reside in a couple of key locations in your checkout, and map to similar locations on the DUT (Device Under Test). Understanding the layout of these directories might give you some perspective:
In this codelab, we will:
First, get the autotest source:
a. If you Got the Code, you already have Autotest.
b. If you do not wish to sync the entire source and reimage a device, you can run tests in a VM.
If the cros_start_vm script fails, you may need to enable virtualization on your workstation. Check for /dev/kvm or run 'sudo kvm-ok' (you might have to 'sudo apt-get install cpu-checker' first). It will report either that /dev/kvm exists and KVM acceleration can be used, or that /dev/kvm doesn't exist and KVM acceleration can NOT be used. In the latter case, hit Esc on boot, go to 'System Security', and turn on virtualization. More information about running tests in a VM can be found here.
Once you have Autotest, there are two ways to run tests: using your machine as a server, or directly on the client DUT. Running tests directly on the device is faster, but requires invoking them from your server at least once first.
1. Enter the chroot:
2. Invoke test_that, to run login_LoginSuccess on a vm with local autotest bits:
The basic usage of test_that:
TEST can be the name of the test, or suite:suite_name for a suite. For example, to run the smoke suite on a device with board x86-mario
Please see the test_that page for more details.
You have to use test_that at least once so it copies over the test and its dependencies before attempting this; if you haven't, /usr/local/autotest may not exist on the device.
Once you're on the client device:
For Python-only changes, test_that uses
The fastest way to edit a test is directly on the client. If you find the text editor on a ChromiumOS device non-intuitive, edit the file locally and use a copy tool like rcp/scp to send it to the DUT.
1. Add a print statement to the login_LoginSuccess test you just ran.
2. rsync it into /usr/local/autotest/tests on the client.
3. Run it by invoking autotest_client, as described in the section on Running Tests Directly on the Client. Note that a print statement won't show up when the test is run via test_that.
The more formal way of editing a test is to change the source and emerge it. The steps for doing this are very similar to those described in the section on emerging tests. You might want to perform a full emerge if you've modified several files, or would like to run your test in an environment similar to the automated build/test pipeline.
A word of caution: copy-pasting from Google Docs has been known to convert consecutive whitespace characters into unicode characters, which will break your control file. Using CTRL-C + CTRL-V is safer than using middle-click pasting on Linux.
Our aim is to create a test which does the following:
1. Create a directory in
2. Create a control file kernel_HdParmBasic/control, a bare minimum control file for the hdparm test:
```python
AUTHOR = "Chrome OS Team"
NAME = "kernel_HdParmBasic"
TIME = "SHORT"
TEST_TYPE = "client"
DOC = """
This test uses hdparm to measure disk performance.
"""

job.run_test('kernel_HdParmBasic', named_arg='kernel test')
```
To this you can add the necessary control variables, as described in the Autotest best practices. job.run_test can take any named arguments, and the appropriate ones will be cherry-picked and passed on to the test.
3. Create a test file:
At a bare minimum the test needs a run_once method, which should contain the implementation of the test; it also needs to inherit from test.test. Most tests also need initialize and cleanup methods. Create a test file
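A minimal sketch of that shape is below; the stub base class stands in for test.test so the snippet is self-contained, whereas a real test inherits from test.test instead:

```python
class test_stub(object):
    """Stand-in for Autotest's test.test base class (sketch only)."""

class kernel_HdParmBasic(test_stub):
    version = 1

    def initialize(self):
        # Optional: one-time setup before the test body runs.
        self.ready = True

    def run_once(self, named_arg=''):
        # Required: the body of the test lives here.
        self.arg = named_arg

    def cleanup(self):
        # Optional: undo any state the test changed on the DUT.
        self.ready = False
```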
Notice how only run_once takes the argument named_arg, which was passed in by the control file.
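That cherry-picking behavior can be sketched like this (illustrative only; FakeJob and REGISTRY are invented stand-ins for the framework's job object and test lookup):

```python
import inspect

class kernel_HdParmBasic:
    def run_once(self, named_arg='default'):
        self.received = named_arg

REGISTRY = {'kernel_HdParmBasic': kernel_HdParmBasic}

class FakeJob:
    def run_test(self, name, **kwargs):
        test = REGISTRY[name]()
        params = inspect.signature(test.run_once).parameters
        # Forward only the kwargs that run_once actually declares.
        picked = {k: v for k, v in kwargs.items() if k in params}
        test.run_once(**picked)
        return test
```

Here FakeJob().run_test('kernel_HdParmBasic', named_arg='kernel test', unrelated=1) forwards named_arg to run_once and drops unrelated.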
If you’d like more perspective you might benefit from consulting the troubleshooting doc.
The results folder contains many logs. To analyze client test logging messages, you need to find kernel_HdParmBasic.(DEBUG, INFO, ERROR), depending on which logging macro you used. Note that logging message priorities escalate: debug < info < warning < error. If you want to see all logging messages, just look in the debug logs.
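Client tests emit these messages through Python's standard logging module; assuming each kernel_HdParmBasic.&lt;LEVEL&gt; file captures messages at that level and above (which is why the DEBUG log has everything), a test body might log:

```python
import logging

# Priorities escalate: DEBUG < INFO < WARNING < ERROR.
logging.debug('raw hdparm output: ...')   # lowest priority: DEBUG log only
logging.info('parsed disk throughput')    # INFO and DEBUG logs
logging.error('hdparm binary missing')    # highest priority: all logs
```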
Client test logs should be in:
where you will have to replace ‘
You can also find the latest results in
In the DEBUG logs you should see messages like:
Note that print messages will not show up in these logs since we redirect stdout. If you've already performed a 'run_remote' once, you can directly invoke your test on a client, as described in the previous section. Two things to note when using this approach:
a. print messages do show up
b. logging messages are also available under autotest/results/default/
You can import any autotest client helper module with the line
You might also benefit from reading how the framework makes autotest_lib available for you.
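The core of that trick can be sketched as follows; this is an assumption-laden illustration (install_autotest_lib is an invented name, and the real framework does this in its common/setup boilerplate modules):

```python
import sys
import types

def install_autotest_lib(autotest_root):
    """Expose the directory tree at autotest_root as the importable
    package 'autotest_lib', without renaming anything on disk."""
    pkg = types.ModuleType('autotest_lib')
    pkg.__path__ = [autotest_root]     # make the module act like a package
    sys.modules['autotest_lib'] = pkg  # future imports resolve through it
    return pkg
```

After something like install_autotest_lib('/usr/local/autotest') has run, an import such as from autotest_lib.client.common_lib import utils resolves against that directory tree.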
kernel_HdParmBasic needs test.test, so it needs to import test from client/bin.
Looking back at our initial test plan, it also needs to:
This implies running things on the command line; the modules to look at are base/site utils.
However, common_lib's utils.py conveniently gives us both.
2. Search output for timing numbers.
3. Report this as a result.
If your test manages any state on the DUT, it might need initialization and cleanup. In our case the subprocess handles its own cleanup, if any. Putting together all we've talked about, our run_once method looks like:
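A hedged sketch of that method, assuming common_lib's utils.system_output() helper and test.test's write_perf_keyval() method; the device path and the sample hdparm line in the docstring are illustrative:

```python
import re

# Placeholder: in a real test this is
#   from autotest_lib.client.common_lib import utils
utils = None

def parse_throughput(hdparm_output):
    """Extract the MB/sec figure from an hdparm timing line such as
    'Timing cached reads: 8714 MB in 2.00 seconds = 4346.76 MB/sec'."""
    match = re.search(r'=\s*([\d.]+)\s*MB/sec', hdparm_output)
    return float(match.group(1))

class kernel_HdParmBasic(object):  # in a real test: inherits test.test
    def run_once(self, named_arg=''):
        # hdparm -T times cached reads, -t times buffered disk reads.
        cache_out = utils.system_output('hdparm -T /dev/sda')
        disk_out = utils.system_output('hdparm -t /dev/sda')
        self.write_perf_keyval({
            'cache_throughput': parse_throughput(cache_out),
            'disk_throughput': parse_throughput(disk_out),
        })
```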
Note the use of performance keyvals instead of plain logging statements. The keyvals are written to
kernel_HdParmBasic/kernel_HdParmBasic cache_throughput 4346.76
kernel_HdParmBasic/kernel_HdParmBasic disk_throughput 144.28