Re: [PATCH 0/2] drm: revert some framebuffer API tests

On Wed, Sep 25, 2024 at 01:52:02PM GMT, Simona Vetter wrote:
> On Tue, Sep 24, 2024 at 08:09:38AM -0700, Guenter Roeck wrote:
> > On 9/24/24 06:56, Maxime Ripard wrote:
> > > On Tue, Sep 24, 2024 at 06:37:59AM GMT, Guenter Roeck wrote:
> > > > On 9/24/24 04:54, Maxime Ripard wrote:
> > > > > +Guenter
> > > > > 
> > > > > On Tue, Sep 24, 2024 at 12:06:28PM GMT, Simona Vetter wrote:
> > > > > > On Tue, Sep 17, 2024 at 08:43:50PM +0300, Jani Nikula wrote:
> > > > > > > The tests consistently trigger WARNs in drm_framebuffer code. I'm not
> > > > > > > sure what the point is with this type of belt-and-suspenders test. The
> > > > > > > warnings *are* the way to flag erroneous API usage.
> > > > > > > 
> > > > > > > Warnings in turn trigger failures in CI. Filtering the warnings is
> > > > > > > error prone and, crucially, would also filter out actual errors when
> > > > > > > the kunit tests are not being run.
> > > > > > > 
> > > > > > > I acknowledge there may be complex test cases where you'd end up
> > > > > > > triggering warnings somewhere deep, but these are not it. These are
> > > > > > > simple.
> > > > > > > 
> > > > > > > Revert the tests, back to the drawing board.
> > > > > > 
> > > > > > Yeah, I think long-term we might want a kunit framework that can
> > > > > > catch the dmesg warnings we expect and test for them, without those
> > > > > > warnings actually going to dmesg. That would be similar to how the
> > > > > > lockdep selftests reroute locking validation so that the expected
> > > > > > positive tests don't wreck lockdep for real.
> > > > > > 
> > > > > > But until that exists, we can't have tests that splat in dmesg when they
> > > > > > work as intended.
> > > > > 
> > 
> > FWIW, that is arguable. More and more tests are added which generate such
> > splats, and I don't see any hesitance from developers about adding more. So
> > far I have counted two in this merge window alone, and that does not include
> > new splats from tests I had already disabled. I simply disable those tests,
> > or don't enable them in the first place if they are new. I did the same with
> > the drm unit tests because of the splats generated by the scaling unit
> > tests, so any additional drm unit test splats make no difference for me:
> > the tests are already disabled.
> 
> I think for drm at least the consensus is clear: we won't have kunit
> tests that splat.

Where was that ever discussed?

> Personally I really don't see the point of unit tests that are
> somewhere between unnecessarily hard and outright too painful to
> deploy in a test rig: either you don't run them (not great), you
> filter splats and might filter too much (not great either), or you
> filter as little as possible and fight false positives (also kinda
> suboptimal).

Or you don't do any of that, and just rely on the canonical way to run
kunit tests, trusting it to pass the tests that do indeed pass and to
fail or warn on those that don't.

> Especially for unit tests, this stuff must be totally rock solid.

Is there any evidence that those tests were not rock solid?

> We've had similar discussions in the past around unit tests that wasted
> too much CPU time on randomized combinatorial testing, and we threw
> those out of drm too. Test hygiene matters.

We had that discussion because those tests could run for up to a minute
on some platforms. That's not the same issue at all.
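
For completeness, the expected-warning framework Simona describes above
could look roughly like this from a test author's point of view. This is
a purely hypothetical sketch: kunit_begin_expect_warning(),
kunit_end_expect_warning() and drm_framebuffer_misuse() are made-up
names, since no such hooks exist in KUnit today.

  #include <kunit/test.h>

  /*
   * Imagined helper: reroute any WARN raised while running @call into a
   * per-test counter instead of dmesg, the way the lockdep selftests
   * reroute locking validation.
   */
  #define KUNIT_EXPECT_WARNING(test, call)                               \
          do {                                                           \
                  kunit_begin_expect_warning(test);   /* hypothetical */ \
                  call;                                                  \
                  KUNIT_EXPECT_GT(test,                                  \
                                  kunit_end_expect_warning(test), 0);    \
          } while (0)

  static void drm_framebuffer_misuse_test(struct kunit *test)
  {
          /* The bad call is *supposed* to WARN; the test passes iff it does. */
          KUNIT_EXPECT_WARNING(test, drm_framebuffer_misuse()); /* hypothetical */
  }

With something like that in place, the splat never reaches dmesg and a
missing warning becomes an ordinary test failure, which would address
both Jani's and Guenter's complaints.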

Maxime
