On Wed, Sep 17, 2014 at 6:01 PM, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
> On Wed, Sep 17, 2014 at 05:54:52PM +0200, Daniel Vetter wrote:
>> On Wed, Sep 17, 2014 at 12:34:46PM +0100, Chris Wilson wrote:
>> > At the end of a subtest, check for any WARNs or ERRORs (or worse!)
>> > emitted since the start of our test and FAIL the subtest if any are
>> > found. This will prevent silent failures due to an oops from going
>> > amiss or being falsely reported as TIMEOUTs.
>> >
>> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
>>
>> We already have this in piglit, including filtering for non-i915 issues
>> (which happen a lot, especially on suspend/resume tests). So this just
>> duplicates that.
>
> What piglit? I don't see QA reports involving piglit, and they seem to
> mistake kernel OOPSes for benign TIMEOUTs quite frequently.

Can you please reply with the relevant bugzillas? For about two months now
QA has been expected to use the piglit runner for their framework, so any
difference in test results compared to what piglit would report is a fail.
Note though that piglit's timeout support was broken by some refactoring
from Dylan Baker; Thomas has patches to fix that again. But if this is
indeed a failure from QA then I'll escalate this like mad.

>> Also imo it's nice to differentiate between test failures and dmesg
>> noise in at least some tests, so clamping to FAIL isn't the right thing
>> to do I think.
>
> Also where is my XFAIL? :)

The plan is to have a server with piglit JSONs of the latest -nightly runs
for the full set of machines. QA hasn't delivered that yet though ...
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
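
[Editor's note: for readers outside the thread, the check under discussion
amounts to snapshotting the kernel log at subtest start and scanning any new
records at subtest end. The sketch below is illustrative only, not Chris's
actual patch or piglit's filter: the function names dmesg_check_begin()/
dmesg_check_end(), the record buffer size, and the "i915"/"drm" substring
filter are assumptions; the /dev/kmsg semantics are the standard Linux ones.]

/* Minimal sketch of a per-subtest dmesg check, assuming Linux /dev/kmsg:
 * each read() returns one record of the form "<prio>,<seq>,<ts>,...;<msg>".
 */
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int kmsg_fd = -1;

/* Call at subtest start: open /dev/kmsg and skip everything already
 * logged, so only records emitted during this subtest are seen. */
static void dmesg_check_begin(void)
{
	kmsg_fd = open("/dev/kmsg", O_RDONLY | O_NONBLOCK);
	if (kmsg_fd >= 0)
		lseek(kmsg_fd, 0, SEEK_END);
}

/* Call at subtest end: returns true if any record at KERN_WARNING (4)
 * or worse mentioning i915/drm appeared since dmesg_check_begin(). */
static bool dmesg_check_end(void)
{
	char record[8192];
	bool tainted = false;

	if (kmsg_fd < 0)
		return false;

	for (;;) {
		ssize_t len = read(kmsg_fd, record, sizeof(record) - 1);
		unsigned int prio;

		if (len <= 0) {
			if (len < 0 && errno == EPIPE)
				continue; /* ring buffer wrapped under us */
			break; /* EAGAIN: no more records */
		}
		record[len] = '\0';

		/* The low 3 bits of the prefix are the syslog level;
		 * 0..4 spans KERN_EMERG through KERN_WARNING. */
		if (sscanf(record, "%u,", &prio) != 1 || (prio & 7) > 4)
			continue;

		/* Filter out noise from unrelated drivers, in the spirit
		 * of the non-i915 filtering piglit does. */
		if (strstr(record, "i915") || strstr(record, "drm"))
			tainted = true;
	}

	close(kmsg_fd);
	kmsg_fd = -1;
	return tainted;
}

A runner would call dmesg_check_begin() before each subtest body and report
FAIL (rather than a benign TIMEOUT) whenever dmesg_check_end() returns true
afterwards.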