On Wed, Sep 17, 2014 at 06:13:56PM +0200, Daniel Vetter wrote:
> On Wed, Sep 17, 2014 at 6:01 PM, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
> > On Wed, Sep 17, 2014 at 05:54:52PM +0200, Daniel Vetter wrote:
> >> On Wed, Sep 17, 2014 at 12:34:46PM +0100, Chris Wilson wrote:
> >> > At the end of a subtest, check for any WARNs or ERRORs (or worse!)
> >> > emitted since the start of our test and FAIL the subtest if any are
> >> > found. This will prevent silent failures due to an oops from going
> >> > amiss or being falsely reported as TIMEOUTs.
> >> >
> >> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> >>
> >> We already have this in piglit, including filtering for non-i915 issues
> >> (which especially on s/r tests happen a lot). So this just duplicates
> >> that.
> >
> > What piglit? I don't see QA reports involving piglit, and they seem to
> > mistake kernel OOPSes for benign TIMEOUTs quite frequently.
>
> Can you please reply with the relevant bugzillas? For about 2 months now
> QA is supposed to have been using the piglit runner for their framework,
> so any difference in test results compared to what piglit would report
> is fail.

All reproduction recipes still use the bare test runner, e.g.
https://bugs.freedesktop.org/show_bug.cgi?id=83969 from today. That is
also a good example of the test missing the kernel warning that
triggered the actual failure.

I don't see any problem with having the bare test runner able to detect
an oops during a subtest. I am pretty sure there have been timeouts
within the last month or so that were mutex deadlocks due to a driver
oops - but that would require a bit of digging to confirm.
-Chris

--
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx