trace_inputs.py and trace_test_cases.py are tools to trace, respectively, any executable or specifically a google-test executable. They report each child process, its command line, initial current working directory, exit code, and each file it accessed, in a hierarchical format. This is useful to measure how many files are read by each executable, including its child processes, and to see how many short-lived child processes are created.
Tracing browser_tests is especially frightening.
The output format is JSON, so it can easily be consumed by a webpage, and it is formatted in an OS-independent way as much as possible. OS-specific tracer limitations may reduce the amount of information available: dtrace has issues tracing the command line, and the Windows NT Kernel tracer doesn't record the initial working directory, because there is no such thing at the kernel level; the current working directory is purely a user-mode concept on Windows.
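Since the output is JSON, consuming it from any language is straightforward. The sketch below shows generic consumption with Python's json module; the schema shown (a "root" key with "executable" and "files") is a made-up stand-in, not the actual format produced by trace_inputs.py:

```python
import json

def load_trace(path):
    """Loads a trace JSON file and returns the parsed data."""
    with open(path) as f:
        return json.load(f)

# Stand-in data; the real schema is defined by trace_inputs.py and may
# vary with OS-specific tracer limitations.
sample = '{"root": {"executable": "out/Release/foo_tests", "files": ["src/a.cc"]}}'
data = json.loads(sample)
print(sorted(data["root"]))  # ['executable', 'files']
```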
If you are looking to generate
files, you are in the wrong place. The scripts below are used as libraries by the higher-level scripts documented in the test isolation page.
trace_inputs.py runs an executable, be it Chrome itself or a unit test, and uses strace on Linux, dtrace* on OSX or the NT Kernel Logger** on Windows to log all the files that were opened or touched. It generates an OS-specific log file that can then be read to generate a JSON file.
* dtrace requires the root password on OSX. You can work around this with setuid or visudo, but there are security-related implications.
** logman.exe requires an elevated command prompt. It's the slowest of the three.
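For the visudo workaround mentioned above, the rule typically looks like the following sudoers fragment; the user name and the dtrace path are placeholders, and (as noted) allowing passwordless dtrace has security implications:

```
# Hypothetical sudoers entry (always edit via visudo): lets "builduser"
# run dtrace as root without a password prompt.
builduser ALL=(root) NOPASSWD: /usr/sbin/dtrace
```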
In general, you will run it twice to generate a JSON file: first to trace the executable, second to read and analyze the trace logs generated by the tracer and output a JSON file.
python tools/swarm_client/trace_inputs.py trace -l log_file out/Release/foo_tests
python tools/swarm_client/trace_inputs.py read -l log_file --root-dir . --json
clean deletes all the logs properly.
python tools/swarm_client/trace_inputs.py clean -l log_file
help prints information about a specific subcommand.
python tools/swarm_client/trace_inputs.py help trace
read reads a trace previously generated with trace. The example below strips any file accessed outside of --root-dir to reduce noise, and replaces any path matching a --variable value with the corresponding variable name. Variables are useful to abstract differences between OSes or build tools, like abstracting the build output directory.
python tools/swarm_client/trace_inputs.py read -l log_file -V PRODUCT_DIR out/Release --root-dir /home/$USER/chrome/src --json
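Conceptually, the -V/--variable substitution is a path-prefix rewrite. The sketch below illustrates the idea only; the token syntax and the exact matching rules of trace_inputs.py's real output are not reproduced here:

```python
def substitute(path, variables):
    """Replaces known path prefixes with variable tokens.

    `variables` maps a variable name to the path prefix it stands for,
    e.g. {'PRODUCT_DIR': 'out/Release'}. The '<(NAME)>' token syntax is
    an assumption for illustration.
    """
    for name, prefix in variables.items():
        if path == prefix or path.startswith(prefix + '/'):
            return '<(%s)>' % name + path[len(prefix):]
    return path

print(substitute('out/Release/foo_tests', {'PRODUCT_DIR': 'out/Release'}))
# <(PRODUCT_DIR)>/foo_tests
```

Abstracting the build output directory this way lets the same JSON result apply whether the build landed in out/Release, out/Debug, or an OS-specific location.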
trace executes an executable under the tracer and generates a new trace log. It doesn't output any information; use read to read the trace.
python tools/swarm_client/trace_inputs.py trace -l log_file out/Release/base_unittests
To generate a log and then read it:
python tools/swarm_client/trace_inputs.py trace --log out/Release/net_unittests.results.log
python tools/swarm_client/trace_inputs.py read --log out/Release/net_unittests.results.log \
--root-dir /home/$USER/chrome/src --json
is a wrapper script that starts Xvfb on Linux and is a no-op on other platforms. Any previous log is overwritten.
trace_test_cases.py takes a given google-test executable and traces all of its tests, running them individually. trace_inputs.py read must be used to read the traces back. The difference between this and trace_inputs.py trace is that trace_inputs.py trace runs all the tests in the executable serially under a single trace, while trace_test_cases.py runs one executable per CPU core, configurable with --jobs, and traces each test case individually. The end result is that trace_test_cases.py is generally faster than trace_inputs.py trace. Use --help for more information.
Run 1% of the test cases by selecting shard 0 of 100 shards, then read the logs:
python tools/swarm_client/trace_test_cases.py --index 0 --shards 100 out/Release/net_unittests
python tools/swarm_client/trace_inputs.py read -l out/Release/net_unittests.logs
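The --index/--shards selection above can be pictured as a round-robin partition of the test case list, so shard 0 of 100 gets every 100th test, i.e. 1% of them. This is a sketch of the general technique, not necessarily trace_test_cases.py's exact algorithm:

```python
def select_shard(test_cases, index, shards):
    """Returns the subset of test cases assigned to shard `index` out of
    `shards` total shards, using round-robin assignment."""
    return test_cases[index::shards]

tests = ['Test%d' % i for i in range(10)]
print(select_shard(tests, 0, 5))  # ['Test0', 'Test5']
```

Round-robin assignment keeps shard sizes within one test of each other, so each shard is a representative slice of the full suite.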