Any comments before I merge the patch http://review.gluster.org/#/c/13393/ ?
On Mon, Feb 8, 2016 at 3:15 PM, Raghavendra Talur <rtalur@xxxxxxxxxx> wrote:
On Tue, Jan 19, 2016 at 8:33 PM, Emmanuel Dreyfus <manu@xxxxxxxxxx> wrote:
On Tue, Jan 19, 2016 at 07:08:03PM +0530, Raghavendra Talur wrote:
> a. Allowing tests to be re-run until they pass leads to complacency in
> how tests are written.
> b. A test is bad if it is not deterministic, and running a bad test has *no*
> value. We are wasting time even if the test runs for only a few seconds.
I agree with your vision for the long term, but my proposal addresses the
short-term situation. We could also use the retry approach to fuel your
blacklist approach:
We could imagine a system where the retry feature casts votes on
individual tests: each time a test fails once and succeeds on retry, cast
a +1 unreliable vote for that test.
After a few days, we will have a wall of shame for unreliable tests,
which could either be fixed or go to the blacklist.
I do not know what software to use to collect and display the results,
though. Should we have a gerrit change for each test?

This should be the process for adding tests to the bad tests list. However, I have run out of time on this one. If someone would like to implement it, go ahead; I don't see myself trying this anytime soon.
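For anyone who does pick this up, a minimal sketch of the vote-collecting idea could look like the following. This is not part of any patch; the log file name, format, and function names are invented for illustration. Each pass-on-retry appends the test name to a log, and tallying the log produces the "wall of shame" sorted by vote count:

```shell
#!/bin/bash
# Hypothetical sketch of the "+1 unreliable" vote idea. The log file
# path and format are assumptions, not anything from run-tests.sh.
VOTELOG="${VOTELOG:-/tmp/unreliable-votes.log}"

cast_unreliable_vote() {
    # Record one +1 unreliable vote: the test failed, then passed on retry.
    echo "$1" >> "$VOTELOG"
}

wall_of_shame() {
    # Most-voted tests first: candidates to fix or move to the blacklist.
    sort "$VOTELOG" | uniq -c | sort -rn
}

# Example: simulate a few votes (test names are made up).
: > "$VOTELOG"
cast_unreliable_vote "tests/basic/afr/self-heal.t"
cast_unreliable_vote "tests/basic/afr/self-heal.t"
cast_unreliable_vote "tests/bugs/glusterd/bug-0000.t"
wall_of_shame
```

A Jenkins job could run wall_of_shame periodically and publish the output, which sidesteps the question of filing a gerrit change per test.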
--
Emmanuel Dreyfus
manu@xxxxxxxxxx

Thanks for the inputs. I have refactored run-tests.sh to use a retry option. If run-tests.sh is started with the -r flag, failed tests are run once more and are not considered failed if they pass on the retry. Note: adding the -r flag to the Jenkins config is not done yet.

I have also implemented a better version of the blacklist, which complies with Manu's requirement that bad tests be tracked at per-OS granularity.

Here is the patch: http://review.gluster.org/#/c/13393/
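For readers who have not opened the patch, the -r behaviour can be sketched roughly as below. This is a simplified stand-in, not the actual run-tests.sh code: run_one_test is a placeholder for however the harness invokes a single .t file, and the variable names are invented.

```shell
#!/bin/bash
# Rough sketch (assumptions, not the real patch) of retry-on-failure:
# a failed test is re-run once, and counts as failed only if it fails
# both times.
retry=yes        # in the real script this would be set by the -r flag
failed_tests=""

run_one_test() {
    # Placeholder: a real harness would run something like prove on "$1".
    bash "$1"
}

run_with_retry() {
    local t="$1"
    if ! run_one_test "$t"; then
        if [ "$retry" = "yes" ] && run_one_test "$t"; then
            echo "$t passed on retry (unreliable, but not failed)"
        else
            failed_tests="$failed_tests $t"
        fi
    fi
}
```

A test that fails once and passes on the second run therefore leaves failed_tests empty, which is exactly the behaviour the complacency concern in (a) above is about.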
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel