On Wed, May 8, 2024 at 2:48 AM Amir Goldstein <amir73il@xxxxxxxxx> wrote:
>
> On Tue, May 7, 2024 at 9:44 PM Luis Chamberlain <mcgrof@xxxxxxxxxx> wrote:
> >
> > Dear LPC session leads,
> >
> > We'd like to gather together and talk about current ongoing
> > developments / changes on kdevops at LSFMM. Those interested in
> > automation of complex workflows with kdevops are also welcome. This
> > is best addressed informally, but since I see an open slot at
> > 10:30am on Tuesday, I figured I'd check to see if we can snatch it.
>
> The empty slot is there for flexibility of the schedule, and also
> wouldn't storage/MM people be interested in kdevops?
>
> I've placed your session instead of the FS lightning talks on Tuesday,
> after Leah's FS testing session.
> There are enough slots for FS lightning talks.
>
> There are several empty slots throughout the agenda left for
> flexibility, including the one you mentioned on Tuesday morning.
> The kdevops session is for a very specialized group of developers,
> so if that group is assembled and decides to use an earlier slot
> we can do that on the spot.

kdevops could be *extremely* useful to understand better (and as a way
to share "best practices" and ideas on testing from various filesystems).
I would be very happy if there were an easy way to do the following
faster/easier:

1) make it easier to run a reasonably large set of fs tests automatically
on checkin of a commit or set of commits (e.g. to an externally visible
github branch) before they go into linux-next, plus a larger set of test
automation that is run automatically on pull requests (I kick these
tests off semi-manually for cifs.ko and ksmbd.ko today)

2) make it easier as a maintainer to get reports of automated testing of
stable-rc (or automate running tests against stable-rc by filesystem type
and send failures to the specific fs's mailing list). Also make the set
of tests run for a particular fs more visible, so maintainers/contributors
can note where important tests are being left out for that fs

3) make it easier to auto-bisect which commit caused a regression when a
failing test is spotted (a rough sketch of what I mean is at the end of
this mail)

4) make it easier to automatically enable certain fs-specific debug
tooling (e.g. eBPF scripts, tracepoints, or log capturing) when a test
fails, i.e. on a failure enable tracing and restart the failing tests
(also sketched below)

5) make it easier to collect log output at the end of each test to catch
"suspicious" things (like network reconnects/timeouts, dmesg events
logged, fs-specific stats or debug data that show excessive failures or
slow responses)

6) an easy way to tell if a kdevops run is "suspiciously slow" (i.e. if a
test or set of tests is more than 20% slower than the baseline test run,
it could indicate a performance regression that needs to be bisected and
identified; a rough comparison sketch is below as well)

-- 
Thanks,

Steve
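
To make (3) a bit more concrete, here is a very rough, illustrative
sketch of the kind of auto-bisect wrapper I have in mind, built around
"git bisect run". The tree path and the single-test wrapper script are
made-up placeholders, and a real kdevops flow would also need to rebuild
and boot the kernel at each bisect step:

#!/usr/bin/env python3
# Rough sketch only: auto-bisect the commit that broke a single test.
# KERNEL_TREE and TEST_CMD are hypothetical; a real setup would rebuild
# and boot the kernel for each commit that "git bisect run" tries.
import subprocess
import sys

KERNEL_TREE = "/path/to/linux"                 # hypothetical checkout
TEST_CMD = "./run-one-fstest.sh generic/475"   # hypothetical wrapper that
                                               # exits non-zero on failure

def git(*args):
    subprocess.run(["git", "-C", KERNEL_TREE, *args], check=True)

def auto_bisect(good: str, bad: str) -> None:
    git("bisect", "start", bad, good)
    try:
        # "git bisect run" reruns TEST_CMD on each candidate commit and
        # uses its exit status to mark the commit good (0) or bad (non-zero).
        git("bisect", "run", "sh", "-c", TEST_CMD)
    finally:
        git("bisect", "reset")

if __name__ == "__main__":
    auto_bisect(sys.argv[1], sys.argv[2])       # e.g. v6.8 v6.9-rc7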
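
Similarly for (4), a rough sketch of "rerun the failing test with tracing
enabled", using the standard tracefs mount point and the cifs tracepoint
group as the example; the single-test wrapper is again a made-up
placeholder:

#!/usr/bin/env python3
# Rough sketch only: if a test fails, enable a tracepoint group via
# tracefs and rerun it, saving the trace buffer next to the test logs.
# The test wrapper is hypothetical; "cifs" is just an example event group.
import subprocess
from pathlib import Path

TRACEFS = Path("/sys/kernel/tracing")
TEST_CMD = ["./run-one-fstest.sh", "generic/013"]   # hypothetical

def set_tracing(event_group: str, on: bool) -> None:
    # Writing 1/0 to events/<group>/enable toggles every event in the group.
    (TRACEFS / "events" / event_group / "enable").write_text("1" if on else "0")

def test_passes() -> bool:
    return subprocess.run(TEST_CMD).returncode == 0

if __name__ == "__main__":
    if not test_passes():
        print("test failed, rerunning with cifs tracepoints enabled")
        set_tracing("cifs", True)
        try:
            test_passes()
            # Save the trace buffer alongside the other test logs.
            Path("trace.txt").write_text((TRACEFS / "trace").read_text())
        finally:
            set_tracing("cifs", False)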
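
And for (6), a trivial sketch of the "more than 20% slower than baseline"
check; the JSON layout mapping test names to runtimes in seconds is
invented purely for illustration:

#!/usr/bin/env python3
# Rough sketch only: flag tests that ran more than 20% slower than a
# stored baseline. The JSON layout ({"generic/001": 12.3, ...}) mapping
# test name to runtime in seconds is invented for illustration.
import json
import sys

SLOWDOWN_THRESHOLD = 1.20   # "suspiciously slow" = >20% over baseline

def find_slow_tests(baseline_file: str, current_file: str) -> list[str]:
    with open(baseline_file) as f:
        baseline = json.load(f)
    with open(current_file) as f:
        current = json.load(f)
    slow = []
    for test, secs in current.items():
        base = baseline.get(test)
        if base and secs > base * SLOWDOWN_THRESHOLD:
            slow.append(f"{test}: {base:.1f}s -> {secs:.1f}s")
    return slow

if __name__ == "__main__":
    suspects = find_slow_tests(sys.argv[1], sys.argv[2])
    if suspects:
        print("possible performance regressions (candidates for bisecting):")
        print("\n".join(suspects))
        sys.exit(1)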