> IMO, if we start skipping test cases we may not be able to
> uncover bugs. Thoughts?

The purpose of a regression test is to catch unanticipated problems related to new changes. Once a problem is already known, a series of tradeoffs must be evaluated.

* What new information is gained by continuing to run the test?
* What are the odds that disabling the test will prevent us from finding a previously unknown, real problem in other code?
* What are the costs of continuing to run the test, especially in terms of endless rebase/retest cycles slowing other development (including bug fixes)?

In general, the cost of an individual test is so close to zero that the benefits don't need to be large. For some of our most frequently failing tests, however, the cost has grown quite large. Worse still, the sheer number of such tests has nearly caused a complete collapse of our test system, as each one blocks fixes for the others. *We still need those tests to pass*, but here are our options:

* Disable the failing tests for X, Y, and Z: those failures block completion of those features.
* Keep running the failing tests for X, Y, and Z: those failures block completion of those features *and* impede progress on everything else.

If you find value in running tests that usually fail, by all means continue to do so. I'm sure we could set up a Jenkins job for that - but IMO it shouldn't be the one that's used for pre-commit verification.

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel