On Sat, 11 Mar 2023 at 07:34, Stephen Boyd <sboyd@xxxxxxxxxx> wrote:
>
> Quoting David Gow (2023-03-10 00:09:48)
> > On Fri, 10 Mar 2023 at 07:19, Stephen Boyd <sboyd@xxxxxxxxxx> wrote:
> > >
> > > Hmm. I think you're suggesting that the unit test data be loaded
> > > whenever CONFIG_OF=y and CONFIG_KUNIT=y. Then tests can check for
> > > CONFIG_OF and skip if it isn't enabled?
> > >
> >
> > More of the opposite: that we should have some way of supporting tests
> > which might want to use a DTB other than the built-in one. Mostly for
> > non-UML situations where an actual devicetree is needed to even boot
> > far enough to get test output (so we wouldn't be able to override it
> > with a compiled-in test one).
>
> Ok, got it.
>
> > I think moving to overlays probably will render this idea obsolete:
> > but the thought was to give test code a way to check for the required
> > devicetree nodes at runtime, and skip the test if they weren't found.
> > That way, the failure mode for trying to boot this on something which
> > required another device tree for, e.g., serial, would be "these tests
> > are skipped because the wrong device tree is loaded", not "I get no
> > output because serial isn't working".
> >
> > Again, though, it's only really needed for non-UML, and just loading
> > overlays as needed should be much more sensible anyway.
>
> I still have one niggle here. Loading overlays requires
> CONFIG_OF_OVERLAY, and the overlay loading API returns -ENOTSUPP when
> CONFIG_OF_OVERLAY=n. For now I'm checking for the config being enabled
> in each test, but I'm thinking it may be better to simply call
> kunit_skip() from the overlay loading function if the config is
> disabled. This way tests can simply call the overlay loading function
> and we'll halt the test immediately if the config isn't enabled.
>

That sounds sensible, though there is a potential pitfall: if
kunit_skip() is called directly from the overlay code, it might
introduce a dependency on kunit.ko from the DT overlay code, which we
might not want. The solution there is either to have a kunit wrapper
function (so the call is already in kunit.ko), or to have a hook to
skip the current test (which probably makes sense to do anyway, but I
think the wrapper is the better option). There's a very rough sketch of
what I mean by a wrapper further down, below the quoted bits.

> > > >
> > > > That being said, I do think that there's probably some sense in
> > > > supporting the compiled-in DTB as well (it's definitely simpler than
> > > > patching kunit.py to always pass the extra command-line option in,
> > > > for example).
> > > > But maybe it'd be nice to have the command-line option override the
> > > > built-in one if present.
> > >
> > > Got it. I need to test loading another DTB on the commandline still,
> > > but I think this won't be a problem. We'll load the unittest-data DTB
> > > even with KUnit on UML, so assuming that works on UML right now it
> > > should be unchanged by this series once I resend.
> >
> > Again, moving to overlays should render this mostly obsolete, no? Or
> > am I misunderstanding how the overlay stuff will work?
>
> Right, overlays make it largely a moot issue. The way the OF unit tests
> work today is by grafting a DTB onto the live tree. I'm reusing that
> logic to graft a container node target for kunit tests to add their
> overlays to. It will be clearer once I post v2.
>
> > One possible future advantage of being able to test with custom DTs at
> > boot time would be for fuzzing (provide random DT properties, see what
> > happens in the test). We've got some vague plans to support a way of
> > passing custom data to tests to support this kind of case (though, if
> > we're using overlays, maybe the test could just patch those if we
> > wanted to do that).
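
(Re the wrapper idea above: something like the below is very roughly
what I was picturing. Completely untested, the function name is
invented, and I'm quoting the of_overlay_fdt_apply() arguments from
memory, so treat it as a sketch rather than a real suggestion.)

static int kunit_of_overlay_fdt_apply(struct kunit *test,
				      const void *overlay_fdt,
				      u32 overlay_fdt_size, int *ovcs_id)
{
	/*
	 * Skip (rather than fail) the calling test when overlay support
	 * isn't built in, so tests can call this unconditionally.
	 */
	if (!IS_ENABLED(CONFIG_OF_OVERLAY))
		kunit_skip(test, "requires CONFIG_OF_OVERLAY=y");

	/* Hand off to the real overlay loading code. */
	return of_overlay_fdt_apply(overlay_fdt, overlay_fdt_size, ovcs_id);
}

Since that would be built as part of KUnit (or at least alongside it),
the overlay code itself never needs to call into kunit.ko.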
>
> Ah ok. I can see someone making a fuzzer that modifies devicetree
> properties randomly, e.g. using different strings for clock-names.
>
> This reminds me of another issue I ran into. I wanted to test adding the
> same platform device to the platform bus twice to confirm that the
> second device can't be added. That prints a warning, which makes
> kunit.py think that the test has failed because it printed a warning. Is
> there some way to avoid that? I want something like
>
> KUNIT_EXPECT_WARNING(test, <call some function>)
>
> so I can test error cases.

Hmm... I'd've thought that shouldn't be a problem: kunit.py should
ignore most messages during a test, unless it can't find a valid result
line. What does the raw KTAP output look like? (You can get it from
kunit.py by passing the --raw_output option).

That being said, a KUNIT_EXPECT_LOG_MESSAGE() or similar is something
we've wanted for a while. I think that the KASAN folks have been working
on something similar using console tracepoints:
https://lore.kernel.org/all/ebf96ea600050f00ed567e80505ae8f242633640.1666113393.git.andreyknvl@xxxxxxxxxx/

Cheers,
-- David
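
P.S. For the double-add case specifically, I'd have expected something
like the below to work already, just checking the return value rather
than catching the warning itself (again, totally untested, and the
device name is made up):

static void platform_device_add_twice_test(struct kunit *test)
{
	struct platform_device *first, *second;

	first = platform_device_alloc("kunit-duplicate-dev", 0);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, first);
	second = platform_device_alloc("kunit-duplicate-dev", 0);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, second);

	KUNIT_ASSERT_EQ(test, 0, platform_device_add(first));
	/* Same name and id again: the second add should be rejected. */
	KUNIT_EXPECT_NE(test, 0, platform_device_add(second));

	platform_device_put(second);
	platform_device_unregister(first);
}

It wouldn't catch the WARN itself, though -- for that we'd want
something along the lines of the console tracepoint approach above.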