On Tue, Mar 20, 2018 at 01:32:17PM +0200, Laurent Pinchart wrote:
> Hello,
>
> On Monday, 19 March 2018 18:41:05 EET Ulrich Hecht wrote:
> > On Fri, Mar 16, 2018 at 9:55 AM, Daniel Vetter <daniel@xxxxxxxx> wrote:
> > > On Thu, Mar 15, 2018 at 03:45:36PM +0100, Ulrich Hecht wrote:
> > >> Hi!
> > >>
> > >> I have run the tests on a Renesas R-Car M3-W's DU device, and have found
> > >> a number of false negatives that mostly stem from use of Intel-specifics
> > >> without checking if that makes sense first. So here's a bunch of fixes
> > >> for those; I hope they are generic enough for upstreaming.
> > >
> > > Nice, other people using this! Do you plan to maintain this actively
> > > going forward, or is this more a one-off effort?
> >
> > For now, this is just an attempt at evaluating if this works for us.
> > It has caught a few things that look like legitimate bugs to me,
> > though...
>
> That's good news! (Not that I'm happy that we have bugs, but catching them
> shows that igt is useful for us.) I hope this will help convince management
> that we should keep contributing to igt going forward.

Yeah, I'm really hoping other vendors will join the fun, and that long-term
we'll have a real KMS validation suite. There's always going to be a need for
vendor-specific tests (and we're happy to merge them, see e.g. vc4), but
having a test suite that tries to be as generic as possible, for a uapi that
tries to be generic too, seems like a really good idea. Very much welcome on
board!

Aside: if there's anything we can do to help convince your management that
this is a good idea (like the rename from intel-gpu-tools to IGT GPU Tools
we've done), please bring it up.

Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch