Re: kdevops BoF at LSFMM

> On May 8, 2024, at 1:45 PM, Steve French <smfrench@xxxxxxxxx> wrote:
> 
> I would be very happy if there were an easy way to do three things
> faster/easier:
> 1) make it easier to run a reasonably large set of fs tests automatically
> on checkin of a commit or set of commits (e.g. to an externally visible
> github branch) before it goes in linux-next, and a larger set
> of test automation that is automatically run on P/Rs (I kick these tests
> off semi-manually for cifs.ko and ksmbd.ko today)
> 2) make it easier as a maintainer to get reports of automated testing of
> stable-rc (or automate running of tests against stable-rc by filesystem type
> and send failures to the specific fs's mailing list).  Make the tests run
> for a particular fs more visible, so maintainers/contributors can note
> where important tests are left out against a particular fs

In my experience, these require the addition of a CI
apparatus like BuildBot or Jenkins -- they are not
directly part of kdevops' mission. Scott Mayhew and
I have been playing with BuildBot, and there are some
areas where integration between kdevops and BuildBot
could be improved (and could be discussed next week).


> 3) make it easier to auto-bisect what commit regressed when a failing test
> is spotted

Jeff Layton has mentioned this as well. I don't think
it would be impossible to get kdevops to orchestrate
a bisect, as long as it has an automatic way to decide
when to run "git bisect good" or "git bisect bad".


> 6) an easy way to tell if a kdevops run is "suspiciously slow" (ie if a test
> or set of tests is more than 20% slower than the baseline test run, it
> could indicate a performance regression that needs to be bisected
> and identified)

Well, sometimes things are just slow because you've built
a test kernel with KASAN and lockdep, or because there are
other jobs running on your test system. Also, due to all
the virtualization involved, it might be difficult to get
consistent performance measurements.

This one seems like it would be hard to engineer, but maybe
there's something that could be done?
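The mechanical part, at least, is simple. A rough sketch of the
threshold check (the numbers are made up, and sourcing per-test
wall-clock seconds from fstests' check.time output is an assumption
on my part, not something kdevops does today):

```shell
#!/bin/sh
# Hypothetical sketch: flag a run as "suspiciously slow" when it exceeds
# the recorded baseline by more than 20%. Both values are wall-clock
# seconds; a real version would read them from stored baseline data and
# the current run's timing output.
baseline=100
current=125
limit=$((baseline * 120 / 100))   # baseline + 20%, integer arithmetic
if [ "$current" -gt "$limit" ]; then
    echo "SLOW: ${current}s vs baseline ${baseline}s (limit ${limit}s)"
else
    echo "OK: ${current}s within 20% of baseline ${baseline}s"
fi
```

The hard part is the one noted above: getting a baseline stable enough
under virtualization that a 20% excursion means anything.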


--
Chuck Lever





