Re: [PATCH 0/2] drm: revert some framebuffer API tests

On Wed, Sep 25, 2024 at 12:41:40PM GMT, Jani Nikula wrote:
> On Tue, 24 Sep 2024, Maxime Ripard <mripard@xxxxxxxxxx> wrote:
> > On Tue, Sep 24, 2024 at 06:56:26PM GMT, Jani Nikula wrote:
> >> On Tue, 24 Sep 2024, Guenter Roeck <linux@xxxxxxxxxxxx> wrote:
> >> >>>> On Tue, Sep 24, 2024 at 12:06:28PM GMT, Simona Vetter wrote:
> >> >>>>> Yeah, I think that long-term we might want a kunit framework
> >> >>>>> that can catch the dmesg warnings we expect and test for them,
> >> >>>>> without those warnings actually going to dmesg, similar to how
> >> >>>>> the lockdep selftests reroute locking validation so that the
> >> >>>>> expected positive tests don't wreck lockdep for real.
> >> >>>>>
> >> >>>>> But until that exists, we can't have tests that splat in dmesg
> >> >>>>> when they work as intended.
> >> >
> >> > FWIW, that is arguable. More and more tests are being added that
> >> > generate such splats, and I don't see any hesitance from developers
> >> > about adding more. So far I have counted two in this commit window
> >> > alone, and that does not include new splats from tests I had already
> >> > disabled. I simply disable those tests, or don't enable them in the
> >> > first place if they are new. I did the same with the drm unit tests
> >> > because of the splats generated by the scaling unit tests, so any
> >> > additional drm unit test splats make no difference for me: the
> >> > tests are already disabled.
> >> 
> >> What's the point of having unit tests that CI systems routinely have
> >> to filter out of test runs? Or having to filter the warnings the
> >> tests generate, potentially missing new ones? Who is going to run
> >> the tests if the existing CI systems choose to ignore them?
> >
> > If we turn this argument around, it means we can't write unit tests
> > for code that will create a warning.
> 
> The question really is: which warnings do we get because of the
> functionality being tested, and which are due to an unexpected and
> unrelated bug? Is it a pass or a fail if the test triggers an
> unrelated warning?
> 
> If the developer runs the tests, are they expected to look at dmesg and
> inspect every warning to decide?

No, there's no such expectation. Yet Intel's CI chose to do exactly
that, and to treat any warning as a failure, even though kunit itself
has no such policy.

> > IMO, this creates a bad incentive, and saying that any capable CI
> > system should reject them is certainly opinionated.
> 
> All I'm saying is that it generates an extra burden for current
> real-world CI systems that run tests on a massive scale and even have
> paid maintainers. It's not trivial to sort expected warnings from
> unexpected ones and to keep the filters updated every time the
> warnings change. It's not without a cost. And the end result might
> just be that the unit tests won't get run at all, or that warnings
> simply get ignored entirely for kunit tests.

I realise it must take a significant amount of resources, but it's also
self-inflicted. You could also stop looking for warnings when running
kunit.

> I don't completely disagree with you either. It's just that we don't
> have that automation on the kernel side, and in the meantime, perhaps
> we should be really conservative about the warnings we create in
> tests?
> 
> If we can't filter out the warnings kernel side, perhaps we could figure
> out a way to annotate the kunit tests that generate warnings on passing
> runs?

Yeah, I think that would be the best solution. That's what Guenter's
series was about :/
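
For the record, something along these lines: the test declares up front
which splat a passing run is allowed to produce, the framework keeps it
out of dmesg, and CI can still treat any other warning as a regression.
The macro and helper names below are made up for illustration and are
not the actual API from that series:

#include <kunit/test.h>

/*
 * Illustrative only: KUNIT_EXPECT_DMESG_WARNING() and
 * fb_create_with_bad_pitch() are invented names, not the API from
 * Guenter's series. The point is the shape: the test declares the
 * expected splat, the framework suppresses it, and any undeclared
 * warning still fails the test.
 */
static void drm_test_fb_bad_pitch(struct kunit *test)
{
	/* Declare the warning this test is expected to trigger. */
	KUNIT_EXPECT_DMESG_WARNING(test, "Invalid pitch");

	/* Trigger the failure path; the WARN() is the point. */
	KUNIT_EXPECT_LT(test, fb_create_with_bad_pitch(), 0);
}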

Maxime


