On Thu, Nov 3, 2022 at 11:23 PM Mauro Carvalho Chehab <mauro.chehab@xxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> I'm facing a couple of issues when testing KUnit with the i915 driver.
>
> The DRM subsystem and the i915 driver have, for a long time, had their
> own way of doing unit tests, which seems to predate KUnit.
>
> I'm now checking whether it is worth starting to use KUnit in i915. So,
> I wrote an RFC with some patches adding support for our tests to be
> reported using Kernel TAP and KUnit.

Thanks very much for looking into this, and sorry for the delayed response (I've been out sick).

I think Daniel has answered most of your questions (thanks, Daniel), and I agree with pretty much everything he's said.

In short, I think it'd be great to have the i915 tests use KUnit where appropriate, and even where KUnit isn't the ideal tool, using KTAP as a result format would be great.

I definitely think there's a whole bunch of areas of i915 for which KUnit makes sense: the more hardware-independent unit tests (things like swizzling/tiling, maybe some command-buffer creation/validation, and "utility" functions generally) are an obvious option. If KUnit isn't working for those sorts of tests, that's clearly a deficiency in KUnit that we'll want to rectify (though it might take some time to do so).

The more hardware-specific stuff probably isn't as good a fit for KUnit, but if using KUnit is the easiest way to get test management, assertion macros, KTAP output, etc., then it may be worth using whatever parts of it make sense. I'd prefer that any tests which depend strongly on specific hardware be marked as such, and maybe live under a different Kconfig option (one which might not be auto-enabled by KUNIT_ALL_TESTS). Though as long as the tests are skipped when the hardware isn't present (which seems to be the case from running them under qemu), it's not a real problem to have them.
It's not something we plan to "officially support", though, so if the requirements of hardware-specific tests and more traditional unit tests conflict, KUnit will lean towards supporting the non-hardware-specific ones.

> There are basically 3 groups of tests there:
>
> - mock tests - check i915 hardware-independent logic;
> - live tests - run some hardware-specific tests;
> - perf tests - check perf support - also hardware-dependent.
>
> As they depend on the i915 driver, they run only on x86 with the PCI
> stack enabled, but the mock tests run nicely via qemu.
>
> The live and perf tests require real hardware. As we run them
> together with our CI, which, among other things, tests module
> unload/reload and tests loading the i915 driver with different
> modprobe parameters, the KUnit tests should be able to run as
> a module.
>
> While testing KUnit, I noticed a couple of issues:
>
> 1. kunit.py parser is currently broken when used with modules
>
>    The parser expects "TAP version xx" output, but this won't
>    happen when loading the kunit test driver.
>
>    Are there any plans or patches fixing this issue?

Yeah: this is on our to-do list to fix, hopefully pretty soon.

> 2. current->mm is not initialized
>
>    Some tests do mmap(). They need the mm user context to be
>    initialized, but this is not happening right now.
>
>    Is there a way to properly initialize it for KUnit?

This is something we've hit before and don't have a good solution for (as you've found out). I'm not an expert on the mm subsystem, so while it's something we want to support, I don't think anyone quite knows how yet.

As a totally wild, untested guess, you may have some luck setting current->mm = current->active_mm, or current->mm = &init_mm?

It's definitely true that, even when loaded from modules, current->mm won't be set, as KUnit tests run in their own kthreads. Maybe setting mm = active_mm would let us carry that context with us from the module loader to the test...
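For what it's worth, that wild guess would plug in at a suite init hook, something like the following. This is an uncompiled, completely untested sketch (i915_live_test_init and the suite name are hypothetical), and it may well be unsafe with respect to mm refcounting; it's only meant to show where such a workaround would live:

```c
/* UNTESTED sketch -- mirrors the guess above; may be wrong or unsafe. */
static int i915_live_test_init(struct kunit *test)
{
	/*
	 * KUnit tests run in their own kthread, so current->mm is NULL.
	 * Try borrowing the active mm for the duration of the test.
	 */
	if (!current->mm)
		current->mm = current->active_mm;
	return 0;
}

static struct kunit_suite i915_live_suite = {
	.name = "i915_live",
	.init = i915_live_test_init,
	/* .test_cases = ... */
};
```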
In any case, you're not the only person to hit this issue, so it's definitely something we'd like to work out.

> 3. there are no test filters for modules
>
>    In order to do proper CI automation, we need to be able to control
>    which tests will run. That's especially interesting at development
>    time, where some tests may not apply or may not run properly on new
>    hardware.
>
>    Are there any plans to add support for it at kunit_test_suites()
>    when the driver is built as a module? Ideally, the best would be to
>    export a per-module filter_glob parameter in such cases.

Again, this is on the to-do list. It may be implemented as a global property which affects future module loads (and might be changeable via, e.g., debugfs), rather than as a per-module parameter, but we haven't designed it yet.

Alas, module support has always seen a little less love than the built-in UML/qemu-based mode, so it does tend to lag behind a bit with these sorts of features, and tends to be less well tested. Hopefully we can bring it up to scratch soon.

> 4. there are actually 3 levels of tests in i915:
>    - Level 1: mock, live, perf
>    - Level 2: test group (mmap, fences, ...)
>    - Level 3: unit tests
>
>    Currently, KUnit seems to have just two levels (test suite and
>    tests). Is there a way to add test groups there?

The closest thing we have at the moment is "parameterised tests", which are really designed for the case where the same test code is run multiple times with different inputs. It should be possible to use this to hack a third level in (have the "parameter" be an array of name/function-pointer pairs), and kunit.py will parse the results correctly, as KTAP doesn't have this limitation.

The other thing you could do is treat each "test group" as a KUnit suite, and just prefix the suite names with "i915_{mock,live,perf}". This isn't ideal, but you could eventually use the test filtering to split them up.
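As a rough illustration of the parameterised-test hack, the sketch below uses KUnit's KUNIT_ARRAY_PARAM/KUNIT_CASE_PARAM machinery with a name/function-pointer pair as the parameter. It's uncompiled, and the igt_mmap_* subtests are made-up names standing in for real i915 "level 3" tests:

```c
/* Uncompiled sketch: fold "level 3" subtests into one parameterised
 * KUnit case; igt_mmap_basic/igt_mmap_revoke are hypothetical. */
struct i915_subtest {
	const char *name;
	int (*func)(void *data);
};

static const struct i915_subtest igt_mmap_subtests[] = {
	{ "basic",  igt_mmap_basic },	/* hypothetical subtest */
	{ "revoke", igt_mmap_revoke },	/* hypothetical subtest */
};

static void i915_subtest_desc(const struct i915_subtest *st, char *desc)
{
	strscpy(desc, st->name, KUNIT_PARAM_DESC_SIZE);
}
KUNIT_ARRAY_PARAM(i915_mmap, igt_mmap_subtests, i915_subtest_desc);

/* Each parameter shows up as its own result line under this case. */
static void i915_mock_mmap_test(struct kunit *test)
{
	const struct i915_subtest *st = test->param_value;

	KUNIT_EXPECT_EQ(test, 0, st->func(NULL));
}

static struct kunit_case i915_mock_cases[] = {
	KUNIT_CASE_PARAM(i915_mock_mmap_test, i915_mmap_gen_params),
	{}
};

static struct kunit_suite i915_mock_mmap_suite = {
	.name = "i915_mock_mmap",
	.test_cases = i915_mock_cases,
};
kunit_test_suite(i915_mock_mmap_suite);
```

This gives you suite ("i915_mock_mmap") / case / parameter as three levels in the output, at the cost of the subtests not being first-class KUnit cases.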
Ultimately, supporting more deeply nested tests is something we're not opposed to doing in KUnit; we've just not had any need for it thus far, so we haven't really looked into how we'd design and implement it. Now that there's a potential user, we can look into it, though it's likely to be lower priority here, given there are workarounds.

Thanks again, and I hope that helps a bit!

Cheers,
-- David