Re: IGT conventions

On Wed, Jan 15, 2014 at 05:26:28PM -0600, Jeff McGee wrote:
> I have a few questions about conventions observed in writing IGT tests.
> 
> I don't see any standard wrapper for logging other than the logging that goes
> with certain igt_ control flow functions. Is it recommended to limit logging to
> just these? I see some different approaches to supporting verbose modes. Is
> it just up to each test?

As long as you only print stuff to stdout you can be rather excessive imo.
Some tests are rather loud by default, others are not. Atm we don't really
have a concept of a verbose mode, but some engineers have implemented one
where it helped them develop their feature. Personally I don't
care since I run igts through piglit, which captures all the output anyway
(well, as long as you don't go ahead and dump a few megabytes of noise
ofc).
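
To make that concrete, a test-local verbose mode is usually nothing more
than a flag gating extra output. A rough sketch (the flag and the helper
below are hypothetical, not a shared igt_ interface):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool verbose; /* hypothetical: set from an extra command line option */

/* print progress chatter only when the test was asked to be verbose */
static void dump_progress(int pipe, uint32_t seq)
{
	if (verbose)
		printf("pipe %d: flip completed, seq %u\n", pipe, seq);
}

Everything still goes to stdout, so piglit captures it either way.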

> Any recommendations on subtest granularity? Testing boils down to repeated
> cycles of 'do something' then 'assert something'. Just wondering if there is a
> guideline on how many of those cycles should each subtest contain. Probably
> this is very case specific.

Whatever looks suitable. Personally I think for very specific interface
tests it makes a lot of sense to split them all up into subtests, but for
more generic stress testing bigger tests also make sense. Also note that
we still have a pile of testcases that predate the subtest
infrastructure, but I think most of them are now split up into subtests
where it makes sense.
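
For reference, splitting things up usually just means one igt_subtest block
per 'do something, assert something' cycle. A rough sketch (the subtest
names and the exercise_* helpers are placeholders, not from a real test):

#include <unistd.h>

#include "drmtest.h"

igt_main
{
	int fd;

	igt_fixture
		fd = drm_open_any();

	igt_subtest("basic") {
		/* one self-contained do-something/assert-something cycle */
		igt_assert(exercise_basic(fd) == 0); /* hypothetical helper */
	}

	igt_subtest("stress") {
		/* bigger, longer-running variant kept as its own subtest */
		igt_assert(exercise_stress(fd) == 0); /* hypothetical helper */
	}

	igt_fixture
		close(fd);
}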

> Also wondering if something like an igt_warn function to go with igt_require
> and igt_assert has been considered. There might be a case where some condition
> is not met which causes the test to become limited in its effectiveness but
> still valid. We might still want to run the test and let it pass but attach a
> caveat. Or would adding this gray area just be too complicated?

Anything you put out to stderr will be tracked as a "warn" in piglit. Atm
we don't have any such use-case I think, mostly since keeping
unbuffered stderr and buffered stdout in sync is a pain ;-) But I guess we
could formalize this a bit, if you see it useful for you, with a

#define igt_warn(a...) fprintf(stderr, a)

or something like that. Some of the checks in kms_flip.c might benefit
from this, since on a lot of our platforms the rather stringent timing
checks often fail randomly. But besides such corner-cases I kinda prefer
if we just split up testcases more instead of trying to be really clever
with the level of failure encountered and reported.
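
To illustrate, one of the kms_flip timing checks could then report instead
of fail (sketch only; the variable names and threshold are made up):

/* warn instead of asserting on a flaky timing check */
if (elapsed_ns > expected_ns + tolerance_ns)
	igt_warn("flip took %llu ns, expected at most %llu ns\n",
		 (unsigned long long)elapsed_ns,
		 (unsigned long long)(expected_ns + tolerance_ns));

Since it goes to stderr, piglit would flag the run as "warn" rather than
"fail".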

On that topic: A lot of the tests also depend upon in-kernel checks. With
piglit we capture dmesg, and as a rule anything above the info log level
is counted as a failure. At least that is how piglit treats dmesg output
for i-g-t testcases, and this is also what our QA reports. So if a
testcase hits a DRM_ERROR left over from debugging, that's considered a bug
(and we have a steady flow of patches to demote such leftovers from
development to DRM_DEBUG/INFO as appropriate).
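
Such demotions are typically one-line kernel patches, along the lines of
(the message text here is made up, just to show the shape):

-	DRM_ERROR("timed out waiting for panel to power off\n");
+	DRM_DEBUG("timed out waiting for panel to power off\n");

With that, an expected-but-noisy condition no longer trips the dmesg check
in piglit.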

Hopefully this clarifies things a bit. Comments and suggestions highly
welcome, especially if you see some need with your own infrastructure for
structured output/data from tests.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx



