On Thu, Mar 29, 2018 at 10:05:35AM +1100, Dave Chinner wrote:
> On Wed, Mar 28, 2018 at 07:30:06PM +0000, Sasha Levin wrote:
> >
> > This is actually something I want maintainers to dictate. What sort of
> > testing would make the XFS folks happy here? Right now I'm doing
> > "./check 'xfs/*'" with xfstests. Is it sufficient? Anything else you'd
> > like to see?
>
> ... and you're doing it wrong. This is precisely why being able to
> discover /exactly/ what you are testing, and being able to browse the
> test results, matters: we need to be able to find out whether tests
> passed when a user reports a bug on a stable kernel.
>
> The way you are running fstests skips more than half the test suite.
> It also runs tests that are considered dangerous because they are
> likely to cause the test run to fail in some way (i.e. trigger an
> oops, hang the machine, leave a filesystem in an unmountable state,
> etc) and hence not complete a full pass.
>
> "./check -g auto" runs the full "expected to pass" regression test
> suite for all configured test configurations (i.e. all config
> sections listed in the configs/<host>.config file).

ie, it would be safer to expect that an algorithmic auto-selection
process for stable kernel fixes should have direct input and
involvement from the subsystems for run-time testing; simply guessing
or assuming won't suffice. The days of compile testing alone should be
well over by now, and we should expect no less for stable kernels,
*especially* if we start involving automation.

Would a way to *start* addressing this long term, for XFS or for
auto-selection on other filesystems, be a topic worth covering at
LSF/MM?

  Luis
--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html