On Wed, Jan 17, 2024 at 06:19:43PM +0000, Mark Brown wrote:
> On Wed, Jan 17, 2024 at 08:03:35AM -0500, James Bottomley wrote:
>
> > I also have to say, that for all the complaints there's just not any
> > open source pull for test tools (there's no-one who's on a mission to
> > make them better). Demanding that someone else do it is proof of this
> > (if you cared enough you'd do it yourself). That's why all our testing
> > infrastructure is just some random set of scripts that mostly does what
> > I want, because it's the last thing I need to prove the thing I
> > actually care about works.
>
> > Finally testing infrastructure is how OSDL (the precursor to the Linux
> > foundation) got started and got its initial funding, so corporations
> > have been putting money into it for decades with not much return (and
> > pretty much nothing to show for a unified testing infrastructure ...
> > ten points to the team who can actually name the test infrastructure
> > OSDL produced) and have finally concluded it's not worth it, making it
> > a 10x harder sell now.
>
> I think that's a *bit* pessimistic, at least for some areas of the
> kernel - there is commercial stuff going on with kernel testing with
> varying degrees of community engagement (eg, off the top of my head
> Baylibre, Collabora and Linaro all have offerings of various kinds that
> I'm aware of), and some of that does turn into investments in reusable
> things rather than proprietary stuff. I know that I look at the
> kernelci.org results for my trees, and that I've fixed issues I saw
> purely in there. kselftest is noticeably getting much better over time,
> and LTP is quite active too. The stuff I'm aware of is more focused
> around the embedded space than the enterprise/server space but it does
> exist. That's not to say that this is all well resourced and there's no
> problem (far from it), but it really doesn't feel like a complete dead
> loss either.

kselftest is pretty exciting to me; "collect all our integration tests
into one place and start to standardize on running them" is good stuff.

You seem to be pretty familiar with the various testing efforts; I
wonder if you could talk about what you see that's interesting and
useful in the different projects?

I think a lot of this stems from a lack of organization and a lack of
communication; I see a lot of projects reinventing things in slightly
different ways and failing to build off of each other.

> Some of the issues come from the different questions that people are
> trying to answer with testing, or the very different needs of the
> tests that people want to run - for example one of the reasons
> filesystems aren't particularly well covered for the embedded cases is
> that if your local storage is SD or worse eMMC then heavy I/O suddenly
> looks a lot more demanding and media durability becomes a real
> consideration.

Well, for filesystem testing we (mostly) don't want to be hammering on
an actual block device if we can help it - there are occasionally bugs
that only manifest when testing on a device with realistic performance
characteristics, and we definitely want to do some amount of
performance testing on actual devices, but most of our testing is best
done in a VM where the scratch devices live entirely in DRAM on the
host (rough sketch of that setup below).

But that's a minor detail, IMO - it doesn't prevent us from having a
common test runner for anything that doesn't need special hardware.
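
To make that concrete, here's roughly what the RAM-backed setup looks
like inside the test VM - a minimal sketch rather than our actual
harness, assuming fstests as the suite and using bcachefs purely as a
placeholder for whichever filesystem is under test:

# brd gives us /dev/ram0 and /dev/ram1; rd_size is in KiB (8 GiB each)
modprobe brd rd_nr=2 rd_size=$((8 * 1024 * 1024))

# format the TEST device up front; SCRATCH_DEV gets reformatted by
# individual tests as needed
mkfs.bcachefs /dev/ram0
mkdir -p /mnt/test /mnt/scratch

# minimal fstests local.config pointing at the RAM-backed devices
cat > local.config <<EOF
export FSTYP=bcachefs
export TEST_DEV=/dev/ram0
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/ram1
export SCRATCH_MNT=/mnt/scratch
EOF

./check -g auto                # run the "auto" test group

None of that touches real media, so the durability concern goes away
and the only resource the host really needs is memory.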