Re: KUnit issues - Was: [igt-dev] [PATCH RFC v2 8/8] drm/i915: check if current->mm is not NULL

On Mon, 7 Nov 2022 15:16:17 -0800
Daniel Latypov <dlatypov@xxxxxxxxxx> wrote:

> On Mon, Nov 7, 2022 at 10:38 AM Michał Winiarski
> <michal.winiarski@xxxxxxxxx> wrote:
> >
> > On Thu, Nov 03, 2022 at 04:23:02PM +0100, Mauro Carvalho Chehab wrote:  
> > > Hi,
> > >
> > > I'm facing a couple of issues when testing KUnit with the i915 driver.
> > >
> > > The DRM subsystem and the i915 driver have, for a long time, had
> > > their own way of doing unit tests, which seems to predate KUnit.
> > >
> > > I'm now checking whether it is worth starting to use KUnit in
> > > i915. So, I wrote an RFC with some patches adding support for
> > > reporting the tests we have via Kernel TAP and KUnit.
> > >
> > > There are basically 3 groups of tests there:
> > >
> > > - mock tests - check i915 hardware-independent logic;
> > > - live tests - run some hardware-specific tests;
> > > - perf tests - check perf support - also hardware-dependent.
> > >
> > > As they depend on the i915 driver, they run only on x86, with the
> > > PCI stack enabled, but the mock tests run nicely via qemu.
> > >
> > > The live and perf tests require real hardware. As we run them
> > > together with our CI, which, among other things, tests module
> > > unload/reload and loading the i915 driver with different
> > > modprobe parameters, the KUnit tests should be able to run as
> > > a module.
> >
> > Note that KUnit tests that do functional/integration testing (on
> > "live" hardware) rather than unit testing (where hardware
> > interactions are mocked) are not very common.
> > Do we have other KUnit tests like this merged?
> 
> I don't think we have other tests like this.
> 
> > Some of the "live tests" are not even that, being closer to pure
> > hardware tests (e.g. live_workarounds, which checks whether values
> > in MMIO regs stick over various HW state transitions).
> >
> > I'm wondering, is KUnit the right tool for this job?  
> 
> The main focus of KUnit is for hw-independent tests.
> So in theory: no.
> 
> But I can imagine it could be easier to write the validation via
> KUNIT_EXPECT_EQ and friends as opposed to writing your own kernel
> module w/ its own set of macros, etc.

Right now, i915 has its own way of doing that, for both hw-independent
and live tests. The current patches keep it, because that helps with
comparing both approaches, and won't disrupt CI jobs.

However, if we're willing to merge KUnit support, the best option would
be to use the same macros/logic for both, as having two different unit
test frameworks in i915 doesn't make much sense.

So yeah, using KUNIT_EXPECT_EQ and friends for live unit and hw tests
makes sense to me.

The main difference between i915 selftest and KUnit seems to be in
the way the tests are started:

- i915 selftest uses module parameters to run the driver in test mode.
  When such parameters are enabled, the probe() function will run the
  tests.

- from what I understood, KUnit places the suites in a
  .kunit_test_suites code section, which does an early initialization
  of the tests, calling the test suite a lot earlier than probe().
  See the sketch below.
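
To illustrate the registration mechanism, here is a minimal sketch of
a hw-independent suite (the suite and test names are made up):

	#include <kunit/test.h>

	static void example_mock_test(struct kunit *test)
	{
		/* Pure logic check, no hardware involved */
		KUNIT_EXPECT_EQ(test, 2 + 2, 4);
	}

	static struct kunit_case example_test_cases[] = {
		KUNIT_CASE(example_mock_test),
		{}
	};

	static struct kunit_suite example_suite = {
		.name = "i915-example",		/* made-up name */
		.test_cases = example_test_cases,
	};

	/*
	 * Places the suite in the .kunit_test_suites section, so the
	 * tests run at module load / init time, before probe().
	 */
	kunit_test_suites(&example_suite);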

Due to such differences, running KUnit against real hardware requires
loading the real driver first, letting it probe the hardware. Once the
driver is loaded and has the hardware properly initialized, a separate
KUnit module is loaded, which calls the test functions from the driver.
That's the approach taken in this series.

It sounds possible to merge both approaches, by adding some helper
functions similar to what kunit_test_suites() does, but, instead of
using a separate segment, running the tests at probe() time.

That would mean a cleaner solution, using all the KUnit macros, and,
in theory, it would be equivalent to what i915 selftest already does.
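
A very rough sketch of what such a probe-time helper could look like
(the helper and i915_live_suite are made up; kunit_run_tests() is the
KUnit function that runs a single suite, and i915_selftest.live is the
existing module parameter):

	/* Hypothetical helper, called from the end of probe() */
	static int i915_run_live_kunit(struct drm_i915_private *i915)
	{
		if (!i915_selftest.live)
			return 0;

		/*
		 * The hardware was already probed at this point, so
		 * the suite can safely touch it.
		 */
		return kunit_run_tests(&i915_live_suite);
	}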

Another possibility would be to make sure that kunit_test_suites()
fully initializes the user context, allowing mmap() to work.

> So my first thought is: "if it works, then you can try using it."
> (Might want to take steps like making sure they don't get enabled by
> CONFIG_KUNIT_ALL_TESTS=y).

Yeah, it makes sense to not enable hw-dependent tests with
KUNIT_ALL_TESTS. Another approach would be to just skip them when the
required hw is not available[1]. Skipping the tests is needed anyway,
as the hardware probing may fail.

[1] Btw, despite what the kernel-doc comments say:

	/**
	 * kunit_skip() - Marks @test_or_suite as skipped
	 *
	 * @test_or_suite: The test context object.
	 * @fmt:  A printk() style format string.
	 *
	 * Skips the test. @fmt is given output as the test status
	 * comment, typically the reason the test was skipped.
	 *
	 * Test execution is halted after kunit_skip() is called.
	 */

kunit_skip() doesn't work inside .suite_init (apparently because the
macro ends up dereferencing a try_catch member that struct kunit_suite
doesn't have), so this would cause compilation errors:

	static int i915_pci_init_suite(struct kunit_suite *suite)
	{
		if (!hw)
			kunit_skip(suite, "no hardware available");

		return 0;
	}
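
Skipping from inside a test case does work, though. So, as a sketch
(i915_hw_available() is a made-up helper), each hw-dependent test
could bail out by itself:

	static void live_workarounds_test(struct kunit *test)
	{
		if (!i915_hw_available())
			kunit_skip(test, "no i915 hardware available");

		/* ... hardware-dependent checks ... */
	}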


> Talking with David, he seems to have echoed my thoughts.
> David also suggested that maybe the test could use a fake of the hw by
> default, but have an option to run against real hw when available.
> I think that sounds like a good chunk of work, so I don't know if you
> need to worry about that.

Emulating a GPU is a lot of work, and brings its own challenges.
Besides that, the goal here is to check the driver against real
hardware, which may have different steppings and versions. Running
on emulated hardware defeats that purpose, and may introduce bugs
of its own.

Regards,
Mauro



