Re: [GIT PULL] bcachefs updates for 6.8

On Sat, Jan 20, 2024 at 10:24:09PM -0500, Kent Overstreet wrote:
> On Wed, Jan 17, 2024 at 06:19:43PM +0000, Mark Brown wrote:
> > On Wed, Jan 17, 2024 at 08:03:35AM -0500, James Bottomley wrote:

> > I think that's a *bit* pessimistic, at least for some areas of the
> > kernel - there is commercial stuff going on with kernel testing with
> > varying degrees of community engagement (eg, off the top of my head
> > Baylibre, Collabora and Linaro all have offerings of various kinds that
> > I'm aware of), and some of that does turn into investments in reusable
> > things rather than proprietary stuff.  I know that I look at the
> > kernelci.org results for my trees, and that I've fixed issues I saw
> > purely in there.  kselftest is noticeably getting much better over time,
> > and LTP is quite active too.  The stuff I'm aware of is more focused
> > around the embedded space than the enterprise/server space but it does
> > exist.  That's not to say that this is all well resourced and there's no
> > problem (far from it), but it really doesn't feel like a complete dead
> > loss either.

> kselftest is pretty exciting to me; "collect all our integration tests
> into one place and start to standardize on running them" is good stuff.

> You seem to be pretty familiar with all the various testing efforts, I
> wonder if you could talk about what you see that's interesting and
> useful in the various projects?

Well, I'm familiar with the bits I look at and some of the adjacent
areas but definitely not with the testing world as a whole.

For tests themselves there are generic suites like LTP and kselftest,
plus a lot of domain specific things which are widely used in their
areas.  Often the stuff that's separate either lives alongside something
like a userspace library rather than being a purely kernel thing, or has
some other special infrastructure needs.
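To make that concrete, here's a minimal sketch of driving both of the
generic suites from a host; the kernel tree path, LTP prefix and the
target/scenario choices are all illustrative, though the make and
runltp invocations themselves are the standard documented ones:

    #!/usr/bin/env python3
    # Minimal sketch of driving the generic suites; paths and the
    # target/scenario choices are illustrative, not recommendations.
    import subprocess

    KERNEL_TREE = "/path/to/linux"  # hypothetical kernel checkout
    LTP_DIR = "/opt/ltp"            # LTP's default install prefix

    def run_kselftest(targets):
        # kselftest builds and runs through the kernel's own makefiles;
        # TARGETS= restricts the run to the listed subsystems.
        subprocess.run(
            ["make", "-C", "tools/testing/selftests",
             "TARGETS=" + " ".join(targets), "run_tests"],
            cwd=KERNEL_TREE, check=True)

    def run_ltp(scenario="syscalls"):
        # runltp's -f picks a scenario file from LTP's runtest/ directory.
        subprocess.run(["./runltp", "-f", scenario],
                       cwd=LTP_DIR, check=True)

    if __name__ == "__main__":
        run_kselftest(["net", "timers"])
        run_ltp()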

For lab orchestration there's at least:

    https://beaker-project.org/
    https://github.com/labgrid-project/labgrid
    https://www.lavasoftware.org/

Beaker and LAVA are broadly similar in a parallel-evolution sort of way:
scalable job scheduler/orchestration systems intended for
non-interactive use, with a lot of overlap in design choices.  LAVA
plays nicer with embedded boards since Beaker comes from Red Hat and is
focused more on server/PC type use cases, though I don't think there's
anything fundamental there.  Labgrid has a strong embedded focus, with
facilities like integrating ancillary test equipment, and caters a lot
more to interactive use than either of the other two, but AIUI doesn't
help so much with batch usage, though that can be built on top.  All of
them can handle virtual targets as well as physical ones.
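As an example of the batch-oriented end of that spectrum, getting a job
into a LAVA instance is just an XML-RPC call with a job definition
you've written separately; the server URL, user and token below are
obviously placeholders, and you should check your instance's API docs
rather than take this as gospel:

    #!/usr/bin/env python3
    # Sketch of submitting a job to a LAVA lab over XML-RPC; the
    # server URL, user and token are placeholders.
    import xmlrpc.client

    # LAVA authenticates XML-RPC calls via a per-user token in the URL.
    SERVER = "https://myuser:MYTOKEN@lava.example.org/RPC2"

    def submit(job_yaml_path):
        server = xmlrpc.client.ServerProxy(SERVER, allow_none=True)
        with open(job_yaml_path) as f:
            # Returns the new job's id (a list of ids for multinode jobs).
            return server.scheduler.submit_job(f.read())

    if __name__ == "__main__":
        print("submitted job", submit("boot-test.yaml"))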

All of these need something driving them to actually generate test jobs
and present the results (a purely illustrative sketch of that glue
follows the list below).  As well as the larger projects there are also
people like Guenter Roeck and myself who run things that amuse us and
report the results by hand.  Of the bigger general purpose
orchestration projects, off the top of my head there are:

    https://github.com/intel/lkp-tests/blob/master/doc/faq.md
    https://cki-project.org/
    https://kernelci.org/
    https://lkft.linaro.org/
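That driving layer doesn't have to be complicated; purely as an
illustration (none of these function names correspond to any real
project's API, and the bodies are stubs), the shape is roughly:

    #!/usr/bin/env python3
    # Entirely hypothetical sketch of the glue the services above
    # implement: watch trees, generate jobs, submit them, report.
    import time

    def new_commits(tree_url, seen):
        # A real driver would poll `git ls-remote` or a web API here.
        return []  # stub

    def generate_jobs(commit):
        # Render per-board/per-suite job definitions (eg, LAVA YAML).
        return []  # stub

    def submit(job):
        # Hand the job to the lab scheduler and wait for results.
        return {}  # stub

    def report(commit, results):
        # Mail the list, update a dashboard, feed kcidb, etc.
        pass

    def main(tree_url):
        seen = None
        while True:
            for commit in new_commits(tree_url, seen):
                results = [submit(job) for job in generate_jobs(commit)]
                report(commit, results)
                seen = commit
            time.sleep(600)  # poll every ten minutes

    if __name__ == "__main__":
        main("git://example.org/linux.git")  # placeholder tree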

CKI and KernelCI are not a million miles apart: they both monitor a
bunch of trees and run well known testsuites that they've integrated,
and have code available if you want to deploy your own thing (eg, for
non-public stuff).  They're looking at pooling their results into kcidb
as part of the KernelCI LF project.  Like 0day, which is proprietary to
Intel, LKFT is proprietary to Linaro; LKFT focuses on running a lot of
tests on stable -rcs with manual reporting, though they do have some
best effort coverage of mainline and -next as well.

There's also a bunch of people doing things specific to a given hardware
type or other interest, often internal to a vendor but for example Intel
have some public CI for their graphics and audio:

    https://intel-gfx-ci.01.org/
    https://github.com/thesofproject/linux/

(you can see the audio stuff doing its thing on the pull requests in
the SOF repo.)  The infra behind these is a bit task specific AIUI; for
example the audio testing includes a lot of boards that don't have
serial consoles or anything (eg, laptops), so it uses a fixed filesystem
on the device, copies a kernel in and uses grub-reboot to try it one
time (a rough sketch of that flow follows below).  They're particularly
interesting because they're more actively tied to the development flow.
The clang people have something too, using a github flow:

    https://github.com/ClangBuiltLinux/continuous-integration2

(which does have some boots on virtual platforms as well as just build
coverage.)
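Here's the promised sketch of that one-shot boot flow: the DUT address,
paths and grub menu entry name are all made up, it assumes a
Debian-style update-grub, and a real setup would also need to install
modules and collect results somehow, but the fallback property is the
interesting bit:

    #!/usr/bin/env python3
    # Rough sketch of the fixed-filesystem/grub-reboot flow described
    # above; the DUT address, paths and menu entry title are made up.
    import subprocess

    TARGET = "root@test-laptop"        # hypothetical device under test
    KERNEL = "arch/x86/boot/bzImage"   # freshly built kernel image

    def on_target(*cmd):
        subprocess.run(["ssh", TARGET, *cmd], check=True)

    def deploy_and_boot_once():
        # Drop the kernel into /boot under a well-known test name.
        subprocess.run(["scp", KERNEL, TARGET + ":/boot/vmlinuz-test"],
                       check=True)
        # Debian-style regeneration of the grub menu to pick it up.
        on_target("update-grub")
        # grub-reboot selects an entry for the *next* boot only, so a
        # crashing test kernel falls back to the known-good default.
        on_target("grub-reboot", "test-kernel")
        # No check here: the connection drops as the box goes down.
        subprocess.run(["ssh", TARGET, "reboot"])

    if __name__ == "__main__":
        deploy_and_boot_once()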

> I think a lot of this stems from a lack of organization and a lack of
> communication; I see a lot of projects reinventing things in slightly
> different ways and failing to build off of each other.

There's definitely some NIHing going on in places but a lot of it comes
from people with different needs or environments (like the Intel audio
stuff I mentioned), or just things already existing and nobody wanting
to disrupt what they've got for a wholesale replacement.  People are
rarely working from nothing, and there's a bunch of communication and
sharing of ideas going on.

> > Some of the issues come from the different questions that people are
> > trying to answer with testing, or the very different needs of the
> > tests that people want to run - for example one of the reasons
> > filesystems aren't particularly well covered for the embedded cases is
> > that if your local storage is SD or worse eMMC then heavy I/O suddenly
> > looks a lot more demanding and media durability a real consideration.

> Well, for filesystem testing we (mostly) don't want to be hammering on
> an actual block device if we can help it - there are occasionally bugs
> that will only manifest when you're testing on a device with realistic
> performance characteristics, and we definitely want to be doing some
> amount of performance testing on actual devices, but most of our testing
> is best done in a VM where the scratch devices live entirely in DRAM on
> the host.

Sure, though there can be limitations with the amount of memory on a lot
of these systems too!  You can definitely do things, it's just not
always ideal: for example, filesystem people tend to default to test
filesystems sized like the total memory of even a lot of modern embedded
boards, so if nothing else you need to tune things down if you're going
to do a memory-only test.
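To illustrate the tuning problem: the usual trick of backing the VM's
test and scratch devices with sparse files on tmpfs works fine, it's
the sizes that need adjusting.  A sketch, where the sizes, paths and
the qemu command line are all illustrative:

    #!/usr/bin/env python3
    # Sketch of memory-backed scratch devices for filesystem testing
    # in a VM; sizes, paths and the qemu command line are illustrative.
    import os
    import subprocess

    SHM = "/dev/shm/fstests"
    SIZE_GB = 8  # per device; the knob to turn down on small hosts

    def make_backing(name):
        path = os.path.join(SHM, name)
        with open(path, "wb") as f:
            # Sparse, so only pages the tests actually dirty cost RAM.
            f.truncate(SIZE_GB << 30)
        return path

    def main():
        os.makedirs(SHM, exist_ok=True)
        test_dev = make_backing("test.img")
        scratch_dev = make_backing("scratch.img")
        subprocess.run([
            "qemu-system-x86_64", "-enable-kvm", "-m", "4G",
            "-kernel", "arch/x86/boot/bzImage",
            "-append", "root=/dev/vda console=ttyS0",
            "-drive", "file=rootfs.img,if=virtio,format=raw",
            "-drive", "file=" + test_dev + ",if=virtio,format=raw",
            "-drive", "file=" + scratch_dev + ",if=virtio,format=raw",
            "-nographic",
        ], check=True)

    if __name__ == "__main__":
        main()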
