Re: kselftest build broken?

On Wed, Jun 12, 2019 at 11:13 PM Daniel Díaz <daniel.diaz@xxxxxxxxxx> wrote:
>
> Hello!
>
> On Wed, 12 Jun 2019 at 14:32, shuah <shuah@xxxxxxxxxx> wrote:
> > On 6/12/19 12:29 PM, Dmitry Vyukov wrote:
> [...]
> > > 1. You suggested installing a bunch of packages. That helped to some
> > > degree. Is there a way to figure out which packages one needs to
> > > install to build the tests, other than asking you?
> >
> > I have to go through discovery at times when new tests get added. I
> > consider this part of being an open source developer: figuring out
> > dependencies for compiling and running. I don't have a magic answer
> > for you, and there is no way to make sure all dependencies will be
> > documented.
>
> This is something we, as users of Kselftests, would very much like to
> see improved. We also find out what is missing by trial and error, but
> keeping up with new tests or subsystems is often difficult, and they
> tend to remain broken (in usage) for some time, until we have the
> resources to look into that and fix it. The config fragments are an
> excellent example of how the test developers and the framework
> complement each other to make things work. Even documenting
> dependencies would go a long way, as a starting point, but I do
> believe that the test writers should do that, rather than the users
> having to figure out everything that is needed to run their tests.
>
> Maybe a precheck() on the tests in order to ensure that the needed
> binaries are around?

Hi Daniel,

The Automated Testing effort:
https://elinux.org/Automated_Testing
is working on a standard for test metadata descriptions, which would
capture required configs, hardware, runtime dependencies, etc. I am
not sure what the current progress is, though.

Documenting dependencies or doing a precheck is a useful first step.
But ultimately this needs to be machine-readable metadata, so that it
is possible to, say, enable as many tests as possible on a CI rather
than simply skip them. A skipped test is better than a falsely failed
test, but it still does not give any test coverage.
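
To illustrate the precheck() idea Daniel mentions, here is a minimal
sketch only (not taken from any existing test; the binary path is made
up, and I am assuming the usual kselftest convention of exit code 4,
KSFT_SKIP, meaning "skipped"):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define KSFT_SKIP 4	/* kselftest "skip" exit code */

/* Skip the test (rather than fail it) if a required binary is missing,
 * so a CI system can tell "cannot run" apart from "ran and failed". */
static void require_binary(const char *path)
{
	if (access(path, X_OK) != 0) {
		printf("SKIP: required binary %s not found\n", path);
		exit(KSFT_SKIP);
	}
}

int main(void)
{
	require_binary("/usr/sbin/ethtool");	/* hypothetical dependency */
	/* ... actual test body would go here ... */
	printf("PASS\n");
	return 0;
}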



> For what it's worth, this is the list of run-time dependency packages
> for OpenEmbedded: bash bc ethtool fuse-utils iproute2 iproute2-tc
> iputils-ping iputils-ping6 ncurses perl sudo python3-argparse
> python3-datetime python3-json python3-pprint python3-subprocess
> util-linux-uuidgen cpupower glibc-utils. We are probably missing a
> few.

Something like this would save me (and thousands of other people) some time.



> [...]
> > > 10. Do you know if anybody is running kselftests? Running as in
> > > running continuously, noticing new failures, reporting these failures,
> > > keeping them green, etc.
> > > I am asking because one of the tests triggers a use-after-free and I
> > > checked it was the same 3+ months ago. And I have some vague memories
> > > of trying to run kselftests 3 or so years ago, and there were a
> > > bunch of use-after-frees as well.
> >
> > Yes, the Linaro test rings run them, and kernel developers do too. I
> > am cc'ing Naresh and Anders to help with tips on how they run tests
> > in their environment. They have several test systems on which they
> > install tests and run them routinely on all stable releases.
> >
> > Naresh and Anders! Can you share your process for running kselftest
> > in the Linaro test farm? Thanks in advance.
>
> They're both in time zones where it's better to be sleeping at the
> moment, so I'll let them chime in with more info tomorrow (their
> time). I can share that we, as part of LKFT [1], run Kselftests with
> Linux 4.4, 4.9, 4.14, 4.19, 5.1, Linus' mainline, and linux-next, on
> arm, aarch64, x86, and x86-64, *very* often: Our test counter recently
> exceeded 5 million! You can see today's mainline results of Kselftests
> [2] and all tests therein.
>
> We do not build our kernels with KASAN, though, so our test runs don't
> exhibit that bug.

But you are aware of KASAN, right? Do you have any plans to use it?
Dynamic tools significantly improve runtime testing efficiency.
Otherwise a test may hit a use-after-free, an out-of-bounds write, an
information leak, a potential deadlock, a memory leak, etc., and still
be reported as "everything is fine". Some of these bugs may even be as
bad as remote code execution. I would expect that catching these would
be a reasonable price for running tests somewhat less often :)
Each of these tools requires a one-off investment for deployment, but
then gives you a constant benefit on each run.
If you are interested, I can go into more detail, as we do lots of
this on syzbot. Besides catching more bugs, there is also an
interesting possibility of systematically testing all error paths.
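
As a purely illustrative userspace analogy (made-up code, not from any
real test), this is the kind of bug that can run "successfully"
without a dynamic tool, but that ASan (-fsanitize=address) in
userspace, or KASAN for the kernel-side equivalent, reports on the
first run:

#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *buf = malloc(16);

	strcpy(buf, "hello");
	free(buf);
	/* Bug: buf is used after being freed. Without a sanitizer the
	 * test may still produce the expected output and be counted as
	 * passing; with ASan/KASAN the use-after-free is reported. */
	return buf[0] == 'h' ? 0 : 1;
}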



