On Wed, May 31, 2017 at 03:02:18PM +0100, Chris Wilson wrote:
> On Wed, May 31, 2017 at 04:45:16PM +0300, Joonas Lahtinen wrote:
> > On Wed, 2017-05-31 at 13:58 +0100, Chris Wilson wrote:
> > > On Wed, May 31, 2017 at 03:23:12PM +0300, Joonas Lahtinen wrote:
> > > >
> > > > Hello,
> > > >
> > > > I went through the gem_* tests from intel-gpu-tools and categorized
> > > > them into rough categories "X | X robustness | X performance", ready
> > > > to be added to feat_profile.json.
> > > >
> > > > Let's open a discussion on which ones should go where. I tried to
> > > > place each test under only one category, and I'm hopeful that we'll
> > > > have the ability to add "depends_on" to create super-features in
> > > > the future, instead of placing a single test under multiple
> > > > categories.
> > > >
> > > > I didn't check all the subtests or wildcard matching against other
> > > > tests; this is just all the test names placed under some category.
> > >
> > > You seem to have assigned them exclusively to one category or
> > > another; most tests belong to several of these categories. More so
> > > when you consider that a subtest may be targeting a completely
> > > different aspect.
> >
> > Yes, that's what I meant to say :) Subtests should probably be matched
> > by another pattern like "\btiled\b", "\bflink\b" etc.
> >
> > Ultimately there would be a resolver which would re-assign the
> > subtests. "Global objects" would then get:
> >
> >   "include_subtests": "flink",
> >
> > which would steal subtests matching /\bflink\b/ from other tests. Do
> > we agree that one subtest would be assigned to only one category, or
> > do you want to see duplication even at that level?
>
> I see duplication everywhere. It's more a concept of tags as opposed to
> categories.
>
> The use of such a system would be:
> "give me all the tests that exercise relocation"
> "give me all the tests that use a context"
> "give me all the tests that exercise contention on $mutex"
> "give me all the tests that exercise file.c:line / this patch set"

file/line is probably out of scope for this; for that we'd need coverage
information for each testcase, and then we could map a diff to the
relevant set of testcases. Nifty, but probably an awful lot of work to
automate ...

> The last one especially.

Yeah, that was kind of the idea here: the entries wouldn't be just an
exclusive list of tests (piglit does that grouping already), but match
patterns for a given feature. A given test can easily show up under
multiple features: e.g. a testcase that exercises GPU hangs combined
with suspend/resume obviously needs to be both a hangcheck test and an
s/r test. I think we can just extend things from there. (A rough sketch
of what such tag-style matching could look like follows below.)

Anyway, thanks a lot for kickstarting this. Unfortunately my vacation
starts in about 4h, so I can't really dig into this more before heading
back.

Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
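
To make the tag idea concrete, here is a minimal sketch of such a
resolver in Python. Everything in it is an assumption for illustration:
the feature names, the patterns, the test/subtest names, and the schema
itself are made up, not the actual feat_profile.json format.

    import re

    # Hypothetical feature profile in the spirit of feat_profile.json:
    # each feature maps to a list of match patterns rather than an
    # exclusive list of tests. Names, patterns and the schema itself
    # are illustrative, not the real file format.
    FEATURES = {
        "Global objects": [r"\bflink\b"],
        "Hangcheck": [r"\bhang\b"],
        "Suspend/resume": [r"\bsuspend\b", r"\bs3\b", r"\bs4\b"],
    }

    def tags_for(name):
        """Return every feature whose patterns match a 'test@subtest' name.

        Tags instead of exclusive categories: one testcase can carry
        several tags, e.g. a hang-over-suspend subtest is both a
        hangcheck test and an s/r test, as discussed above.
        """
        return {feature
                for feature, patterns in FEATURES.items()
                if any(re.search(p, name) for p in patterns)}

    def query(names, feature):
        """'Give me all the tests that exercise <feature>.'"""
        return [n for n in names if feature in tags_for(n)]

    if __name__ == "__main__":
        # Subtest names below are made up for illustration.
        names = [
            "gem_flink_basic@bad-flink",
            "gem_eio@suspend",
            "gem_exec_suspend@basic-s3-hang",
        ]
        for n in names:
            print(n, "->", sorted(tags_for(n)))
        print("Suspend/resume:", query(names, "Suspend/resume"))

One wrinkle worth noting: regex \b treats '_' as a word character, so
\bflink\b matches "bad-flink" but not "gem_flink_basic"; patterns for
igt's underscore-heavy binary names may need to handle '_' explicitly.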