The primary function of the LayoutTests is as a "regression test suite". This means that, while we care about whether a page is being rendered correctly, we care more about whether the page is being rendered the way we expect it to. In other words, we look more for changes in behavior than we do for correctness.
When the output doesn't match, there are two potential reasons for it: either the code is behaving incorrectly (the new output is wrong), or the code is behaving correctly and the checked-in expected file is simply out of date.
In both cases, the convention is to check in a new "-expected" file, even though that file may be codifying errors. This helps us maintain test coverage for all the other things the test is testing while we resolve the bug.
What to do with a failing test
If a test can be rebaselined, it should always be rebaselined instead of adding lines to TestExpectations. Bugs at crbug.com should track fixing incorrect behavior, not lines in TestExpectations.
If a test is never supposed to pass (e.g. it's testing Windows-specific behavior, so can't ever pass on Linux/Mac), move it to the NeverFixTests file. That gets it out of the way of the rest of the project.
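For illustration, a NeverFixTests entry might look like the following (the test path here is hypothetical; the modifiers and the WontFix token follow the standard expectations syntax):

```
[ Linux Mac ] fast/forms/windows-spin-button.html [ WontFix ]
```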
There are some cases where you can't rebaseline and, unfortunately, we don't have a better solution than reverting the patch that caused the failure or adding a line to TestExpectations and fixing the bug later. Reverting the patch is strongly preferred.
These are the cases where you can't rebaseline:
1) It's a reftest
2) It gives different output in release and debug
3) It's flaky, crashes or times out.
4) It's for a feature that isn't shipped on some platforms yet, but will be shortly.
5) It's a W3C auto-imported test. These tests don't have -expected.txt files, so can't just be rebaselined.
Different TestExpectations files
ASANExpectations: Tests that fail under ASAN
LeakExpectations: Tests that have memory leaks under the leak checker
NeverFixTests: Tests that we never intend to fix (e.g. a test for Windows-specific behavior will never be fixed on Linux/Mac).
SlowTests: Tests that take longer than the usual timeout to run. Slow tests are given 5x the usual timeout.
SmokeTests: A small subset of tests that we run on the Android bot.
StaleTestExpectations: Platform-specific lines that have been in TestExpectations for many months. They're moved here to get them out of the way of people doing rebaselines since they're clearly not getting fixed anytime soon.
TestExpectations: The main test failure suppression file. This should really only be used for flaky lines and NeedsRebaseline/NeedsManualRebaseline lines.
W3CImportExpectations: Expectations for auto-imported w3c tests.
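To make the split concrete, here are two hypothetical lines of the kind that belong in the main TestExpectations file (the bug numbers and test paths are made up): a flaky test, and a test awaiting rebaseline.

```
crbug.com/12345 [ Linux ] fast/dom/flaky-test.html [ Failure Pass ]
crbug.com/67890 fast/css/needs-new-baseline.html [ NeedsRebaseline ]
```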
The test expectations are listed in the file LayoutTests/TestExpectations.
The file is not ordered. Hint: Put new changes into a random spot in the file to reduce the chance of merge conflicts when landing your patch.
The syntax of the file is roughly one expectation per line. An expectation can apply either to a directory of tests or to a specific test. Lines prefixed with "# " are treated as comments, and blank lines are allowed as well.
The syntax of a line is roughly:
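The general shape, followed by a concrete line matching the description below (bracketed fields are optional; the `webkit.org/b/` bug prefix is an assumption based on the bug repository mentioned below):

```
[ bugs ] [ "[" modifiers "]" ] test_name [ "[" expectations "]" ]

webkit.org/b/12345 [ Win Debug ] fast/html/keygen.html [ Crash ]
```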
which indicates that the "fast/html/keygen.html" test file is expected to crash when run in the Debug configuration on Windows, and the tracking bug for this crash is bug #12345 in the WebKit bug repository. Note that the test will still be run, so that we can notice if it doesn't actually crash.
Assuming you're running a debug build on Mac Lion, the following lines are all equivalent (in terms of whether the test is performed and its expected outcome):
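For instance, lines along these lines would all match a debug build on Mac Lion (this is an illustrative sketch: modifiers that match the current configuration can be listed explicitly or omitted, and the bug number is made up):

```
webkit.org/b/12345 fast/html/keygen.html [ Crash ]
webkit.org/b/12345 [ Mac ] fast/html/keygen.html [ Crash ]
webkit.org/b/12345 [ Lion ] fast/html/keygen.html [ Crash ]
webkit.org/b/12345 [ Debug ] fast/html/keygen.html [ Crash ]
webkit.org/b/12345 [ Lion Debug ] fast/html/keygen.html [ Crash ]
```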
Also, when parsing the file, we use two rules to figure out if an expectation line applies to the current run:
For example, if you had the following lines in your file, and you were running a debug build on Mac SnowLeopard:
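As a hypothetical illustration (made-up bug numbers), consider:

```
webkit.org/b/12345 [ Mac ] fast/html/keygen.html [ Failure ]
webkit.org/b/67890 [ Win ] fast/html/keygen.html [ Crash ]
```

Running a debug build on Mac SnowLeopard, the first line applies (Mac matches the current configuration) and the second is skipped.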
Note that duplicate expectations are not allowed within the file and will generate warnings.
You can verify that any changes you've made to an expectations file are correct by running:
which will cycle through all of the possible combinations of configurations looking for problems.