On Tue, Feb 05, 2019 at 11:05:59PM -0500, Sasha Levin wrote:
> On Wed, Feb 06, 2019 at 09:06:55AM +1100, Dave Chinner wrote:
> >On Mon, Feb 04, 2019 at 08:54:17AM -0800, Luis Chamberlain wrote:
> >>Kernel stable team,
> >>
> >>here is a v2 respin of my XFS stable patches for v4.19.y. The only
> >>change in this series is adding the upstream commit to the commit
> >>log, and I've now also Cc'd stable@xxxxxxxxxxxxxxx as well. No
> >>other issues were spotted or raised with this series.
> >>
> >>Reviews, questions, or rants are greatly appreciated.
> >
> >Test results?
> >
> >The set of changes look fine themselves, but as always, the proof
> >is in the testing...
>
> Luis noted on v1 that it passes through his oscheck test suite, and
> I noted that I haven't seen any regression with the xfstests
> scripts I have.
>
> What sort of data are you looking for beyond "we didn't see a
> regression"?

Nothing special, just a summary of what was tested so we have some
visibility of whether the testing covered the proposed changes
sufficiently. i.e. something like:

  Patchset was run through ltp and the fstests "auto" group with
  the following configs:

  - mkfs/mount defaults
  - -m reflink=1,rmapbt=1
  - -b size=1k
  - -m crc=0
  ....

  No new regressions were reported.

Really, all I'm looking for is a bit more context for the review
process - nobody remembers what configs other people test. However,
it's important in reviewing a backport to know whether a backported
fix for, say, a bug in the rmap code actually got exercised by the
tests on an rmap-enabled filesystem...

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
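
[Editor's sketch] For concreteness, the kind of run Dave describes
could be driven by a small wrapper like the one below. This is
illustrative only: the xfstests checkout path, the /dev/vdb and
/dev/vdc test devices, and the mount points are assumptions, not
details from this thread, and the ltp run would be a separate step.

#!/bin/sh
# Sketch: cycle the fstests "auto" group through the configs listed
# in Dave's example summary. Device paths, mount points, and the
# checkout location are hypothetical placeholders.
cd /path/to/xfstests || exit 1

for opts in "" "-m reflink=1,rmapbt=1" "-b size=1k" "-m crc=0"; do
    # The check script sources local.config for its settings, so
    # regenerate it with this iteration's MKFS_OPTIONS.
    cat > local.config <<EOF
export FSTYP=xfs
export TEST_DEV=/dev/vdb
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/vdc
export SCRATCH_MNT=/mnt/scratch
export MKFS_OPTIONS="$opts"
EOF
    # check expects TEST_DEV to already contain a filesystem, so
    # remake it to match the config under test ($opts is deliberately
    # unquoted so the options split into separate arguments).
    mkfs.xfs -f $opts /dev/vdb
    ./check -g auto
done

fstests also supports sectioned config files (see
README.config-sections in the xfstests tree), which express this
kind of config matrix more natively; regenerating local.config per
iteration just keeps the sketch self-contained.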