Tuesday, December 8, 2009

Here's an article on how Google conducts its automated testing. It offers a glimpse of how Google manages its testing infrastructure and justifies the testing investment (yes, the testing effort needs to be justified from time to time). We all know that although writing tests is good, poorly written tests suck up development time, yield little or even negative return on investment, and cause disillusionment among developers. Here's how Google determines whether a test is a "good" test (i.e., it catches bugs and has a low maintenance cost) or a "bad" test (i.e., it doesn't find bugs and is brittle):

The first step was to give developers reactive feedback on their tests. For example, the system suggested deleting tests that teams spent a great deal of time maintaining. Google then collected metrics on whether people actually acted on those suggestions. The system also provided metrics to tech leads and managers to show how their teams are doing with tests.

The second step, currently in progress, is to find patterns and indicators. Now that many good and bad tests have been identified, the system looks for common characteristics among them. Once these patterns are collected, algorithms will be designed to identify good and bad tests, and experts will calibrate them manually.
It seems like they are applying pattern recognition to identify good and bad tests. Kudos to Google.
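A minimal sketch of what such pattern recognition could look like, assuming hypothetical features (the article doesn't say which characteristics Google actually measures): take known good and bad tests, compute a feature centroid for each class, and classify a new test by whichever centroid it sits closer to.

```python
from math import dist

# Hypothetical feature vectors per test: (lines_of_code, sleep_calls, mock_objects).
# The labeled examples are made up purely for illustration.
good_tests = [(20, 0, 1), (35, 0, 2), (15, 0, 0)]
bad_tests = [(200, 3, 8), (150, 2, 6), (300, 5, 9)]

def centroid(points):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

GOOD_CENTROID = centroid(good_tests)
BAD_CENTROID = centroid(bad_tests)

def classify(features):
    """Nearest-centroid rule: which known cluster does this test resemble?"""
    if dist(features, GOOD_CENTROID) <= dist(features, BAD_CENTROID):
        return "good"
    return "bad"
```

For example, a short, sleep-free test like `(25, 0, 1)` lands near the good centroid, while a long, mock-heavy one like `(250, 4, 7)` lands near the bad one. A real system would of course use far richer features and expert-calibrated models, as the article describes.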
The software community as a whole would benefit if Google decided to open-source this portion of the code.