On 9/24/24 09:57, Maxime Ripard wrote:
On Tue, Sep 24, 2024 at 06:56:26PM GMT, Jani Nikula wrote:
On Tue, 24 Sep 2024, Guenter Roeck <linux@xxxxxxxxxxxx> wrote:
On Tue, Sep 24, 2024 at 12:06:28PM GMT, Simona Vetter wrote:
Yeah I think long-term we might want a kunit framework so that we can
catch dmesg warnings we expect and test for those, without those warnings
actually going to dmesg. Similar to how the lockdep tests also reroute
locking validation, so that the expected positive tests don't wreck
lockdep for real.
But until that exists, we can't have tests that splat in dmesg when they
work as intended.
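
For illustration, here is a rough sketch of what such an expected-warning
helper could look like. Note that kunit_expect_warning() and
kunit_expected_warnings() are invented names used only to illustrate the
idea; they are not an existing KUnit API, and a real implementation would
need to hook into the WARN() machinery to reroute the backtrace away from
dmesg rather than merely counting calls:

#include <kunit/test.h>
#include <linux/bug.h>
#include <linux/errno.h>

/* Stand-in for the code under test; warns on invalid input. */
static int code_that_warns(int arg)
{
	if (WARN(arg < 0, "invalid argument %d\n", arg))
		return -EINVAL;
	return 0;
}

static void expected_warning_test(struct kunit *test)
{
	/*
	 * Hypothetical helper: declare that exactly one WARN() matching
	 * the pattern is expected.  The framework would capture it
	 * instead of letting it reach dmesg, much like the lockdep
	 * selftests reroute the locking validator.
	 */
	kunit_expect_warning(test, "invalid argument*");

	KUNIT_EXPECT_EQ(test, code_that_warns(-1), -EINVAL);

	/* The test would fail if the expected warning never fired. */
	KUNIT_EXPECT_EQ(test, kunit_expected_warnings(test), 1);
}

static struct kunit_case expected_warning_cases[] = {
	KUNIT_CASE(expected_warning_test),
	{}
};

static struct kunit_suite expected_warning_suite = {
	.name = "expected-warning-example",
	.test_cases = expected_warning_cases,
};
kunit_test_suite(expected_warning_suite);

With something along these lines, a CI system would only see the usual
KUnit pass/fail result, and dmesg would stay clean for tests that
exercise warning paths on purpose.
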
FWIW, that is arguable. More and more tests which generate such splats
are being added, and I don't see any hesitance from developers about
adding more. So far I have counted two in this merge window alone, and
that does not include new splats from tests which I had already
disabled. I simply disable those tests, or don't enable them in the
first place if they are new. I did the same with the drm unit tests
because of the splats generated by the scaling unit tests, so any
additional drm unit test splats make no difference for me since those
tests are already disabled.
What's the point of having unit tests that CI systems routinely have to
filter out of their test runs? Or of having to filter out the warnings
the tests generate, at the risk of missing genuinely new ones? Who is
going to run the tests if the existing CI systems choose to ignore them?
If we turn this argument around, it means we can't write unit tests for
code that will create a warning.
IMO, this creates a bad incentive, and saying that any capable CI
system should reject such tests is certainly opinionated.
Agreed. All I am saying is that _I_ am rejecting them, but it is up to each
individual testbed (or, rather, testbed maintainer) to decide how to handle
the situation.
Guenter