Handling Blink failures
Chromium has many kinds of bots which run different kinds of builds and tests. The chromium.webkit builders run the Blink-specific tests.
You can monitor them using Sheriff-o-Matic, just like the non-Blink bots.
Even among the WebKit bots, there are several different kinds of bots:
- Layout bots: This is where most of the action is, because these bots run Blink's many test suites. They are called "layout" bots because the largest suite, "Web Tests", was historically named LayoutTests; it lives in third_party/blink/web_tests and runs as part of the webkit_tests step on these bots. Web tests can have different expected results on different platforms. To avoid having to store a complete set of results for each platform, most platforms "fall back" to the results used by other platforms when they don't have platform-specific results. Here's a diagram of the expected results fallback graph.
- Leak bots: These run the web tests with leak detection enabled. You can suppress leak-specific failures using the web_tests/LeakExpectations file.
- ASAN bots: These also run the tests, but generally speaking we only care about memory failures on these bots. You can suppress ASAN-specific failures using the web_tests/ASANExpectations file.
- MSAN bots: Same deal as the ASAN bots, but they catch a different class of failures. You can suppress MSAN-specific failures using the web_tests/MSANExpectations file.
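Entries in the expectations files above use the web test expectations syntax: an optional bug link, an optional platform tag list, the test path, and the expected result. A hypothetical suppression (the bug number and test path here are made-up placeholders) might look like:

```
# Suppress a test that crashes only under this sanitizer.
crbug.com/123456 fast/dom/example-test.html [ Crash ]
```

The entry keeps the bot green while the linked bug tracks the underlying failure.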
Generally speaking, developers are not supposed to land changes that knowingly break the bots (and the try jobs and the commit queue are supposed to catch failures ahead of time). However, sometimes things slip through ...
Sheriff-O-Matic is a tool that watches the builders and clusters test failures with the changes that might have caused the failures. The tool also lets you examine the failures. There is more documentation here.
To roll back patches, you can use either git revert or drover. You can also use the "Revert" button on Gerrit.
The flakiness dashboard is a tool for understanding a test's behavior over time. Originally designed for managing flaky tests, it shows a timeline view of each test's recent results. The tool may be overwhelming at first, but the documentation should help.
Comment on the CL or send an email to contact the author. It is the patch author's responsibility to reply promptly to your query.
The web platform team has a large number of tests that are flaky, ignored, or unmaintained. We are in the process of finding teams to monitor test directories, so that we can track these test issues better. Please note that this should not be an individual, but a team. If you have ideas/guesses about some of these directories, please reach out to the team and update the sheet. This is the first step, and the long term plan is to have this information on a dashboard/tool somewhere. Watch this space for updates!