On 2/2/22 6:15 PM, Rob Herring wrote:
> On Wed, Feb 2, 2022 at 4:01 PM Frank Rowand <frowand.list@xxxxxxxxx> wrote:
>>
>> On 2/2/22 2:29 PM, Rob Herring wrote:
>>> On Wed, Feb 2, 2022 at 12:38 PM Frank Rowand <frowand.list@xxxxxxxxx> wrote:
>>>>
>>>> On 2/2/22 5:31 AM, Naresh Kamboju wrote:
>>>>> Linaro started doing Linux Kernel Functional Testing (LKFT).
>>>>> As part of LKFT, we recently enabled CONFIG_OF_UNITTEST=y in our
>>>>> daily test CI.
>>>>>
>>>>> The output of the test looks as below. The current problem is that we
>>>>> have a hard time seeing (grepping) pass/fail for each individual test.
>>>>> We only see a summary at the end with x passes and y fails.
>>>>
>>>> The FAIL messages are printed at loglevel KERN_ERR. The pass messages
>>>> are printed at loglevel KERN_DEBUG. To see the pass messages, set the
>>>> loglevel to allow debug output.
>>>
>>> That alone is not enough. Unless there's a DEBUG define, the
>>> pr_debug() is going to print nothing.
>>
>> I almost mentioned that detail, but decided I didn't need to, given my
>> reference below to dynamic debug.
>>
>>>
>>>> Unfortunately this can add lots of debug output, unless you use dynamic
>>>> debug to enable debug only for drivers/of/unittest.o. There are only
>>>> a few other pr_debug() messages in unittest.
>>>
>>> Dynamic debug is one option. Another would be a module param to enable
>>> running the tests.
>>
>> I could implement that.
>>
>> But that does not address the issue of the individual test pass messages
>> being printed at loglevel KERN_DEBUG. Are you thinking I should add a
>> second module param that would enable printing the test pass messages
>> at the same loglevel as the test fail messages?
>
> Make them info level, perhaps. If someone wants to run the unittests,
> then I think we should just print everything. It's already
> incomprehensible with all the EXPECT lines...

OK. I thought there would be pushback against just printing everything.
I'll redo the patch to have unittest always print the pass messages.
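
In the meantime, the dynamic debug route mentioned above is the least
noisy way to surface the unittest pr_debug() pass messages. A minimal
sketch, assuming CONFIG_DYNAMIC_DEBUG=y and debugfs mounted at
/sys/kernel/debug:

  # enable the pr_debug() output for just the unittest code, at runtime:
  echo 'file drivers/of/unittest.c +p' > /sys/kernel/debug/dynamic_debug/control

  # or, since unittest is usually built in, on the kernel command line at boot:
  dyndbg="file drivers/of/unittest.c +p"

This scopes the extra output to drivers/of/unittest.c only, so the rest
of the kernel's debug messages stay off.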
>
>> I'm not up to date on module params. I'm assuming that I can pass these
>> new params on the boot command line if I build unittest as a built-in
>> instead of as a module.
>
> Yes.
>
>>> Then it can be built, but has to be explicitly
>>> enabled at boot time.
>>
>>> A third option is making it work as a module; then
>>> it's run when loaded. (That was the original plan.)
>>>
>>>> I think a better solution would be to add a config option, something
>>>> like CONFIG_OF_UNITTEST_VERBOSE, that would print the pass messages
>>>> at loglevel KERN_ERR. I'll submit a patch for that and see what the
>>>> review responses are.
>>>
>>> Nak for another config option.
>>
>> Because?
>
> It's another config option... Another build combination to test...
> Users have to rebuild to change behavior...

Thanks for the explanation.

-Frank

>
>>>>> We would like to get your opinion on how hard it would be to include
>>>>> that in the output per test. Maybe like TAP version 14?
>>>>> Another question: how hard do you think it would be to rewrite
>>>>> this as a kunit test, if that is even applicable? I have provided the
>>>>> kunit output links at the end of this email.
>>>>
>>>> Devicetree unittests were suggested as a good candidate for the first
>>>> test to convert to kunit when kunit was implemented. Brendan tried
>>>> to convert it, and we quickly saw that it was not a good candidate.
>>>> Devicetree unittests do not fit the unit test mold; they are a very
>>>> different creature. Brendan has a good term for this type of test
>>>> (Brendan, was it "acceptance" test?).
>>>
>>> I thought you ended up agreeing with using kunit? Whatever you want to
>>
>> Not the kunit _framework_.
>>
>>> call the DT tests, there's not really any good reason to do our own
>>> pass/fail messages.
>>
>> Yes, I would like to change the pass/fail messages to follow the same
>> standard as kunit, so that the test frameworks could easily process
>> the unittest results. That has been on my todo list.
>
> Ah, I misunderstood.
>
> Rob
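
P.S. For anyone not familiar with the format Naresh mentions: TAP-style
output emits one greppable line per test, which is exactly what the
summary-only output lacks. Roughly like this (the test names below are
only illustrative, not actual unittest output):

  TAP version 14
  1..3
  ok 1 - of_unittest_check_tree_linkage
  ok 2 - of_unittest_property_string
  not ok 3 - of_unittest_changeset

Grepping for "not ok" then finds every failure, and test harnesses can
parse the results mechanically.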