Re-add lists to CC.

On Fri, Aug 28, 2020 at 06:47:06AM +0300, Amir Goldstein wrote:
> On Thu, Aug 27, 2020, 5:11 PM Brian Foster <bfoster@xxxxxxxxxx> wrote:
>
> > On Thu, Aug 27, 2020 at 10:29:05AM +0300, Amir Goldstein wrote:
> > > On Thu, Aug 27, 2020 at 10:02 AM Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Thu, Aug 27, 2020 at 09:58:09AM +0300, Amir Goldstein wrote:
> > > > > I imagine that ext4 could also be burned by this.
> > > > > Do we have a reason to limit this requirement to xfs?
> > > > > I prefer to make it generic.
> > > >
> > > > The whole idea that discard zeroes data is just completely broken.
> > > > Discard is a hint that is explicitly allowed to do nothing.
> > >
> > > I figured you'd say something like that :)
> > > but since we are talking about dm-thin as a solution for predictable
> > > behavior at the moment, and this sanity check helps avoid adding
> > > new tests that can fail to some extent, is the proposed bandaid good
> > > enough to keep those tests alive until a better solution is proposed?
> > >
> > > Another concern that I have is that perhaps adding dm-thin to the fsx
> > > tests would change the timing of io in a subtle way and hide the
> > > original bugs that they caught:
> > >
> > > 47c7d0b19502 ("xfs: fix incorrect log_flushed on fsync")
> > > 3af423b03435 ("xfs: evict CoW fork extents when performing finsert/fcollapse")
> > > 51e3ae81ec58 ("ext4: fix interaction between i_size, fallocate, and
> > > delalloc after a crash")
> > >
> > > Because at the time I (re)wrote Josef's fsx tests, they flushed out
> > > the bugs on spinning rust much more frequently than on SSD.
> > >
> > > So Brian, if you could verify that the fsx tests still catch the
> > > original bugs, by reverting the fixes or running with an old kernel,
> > > and probably running on a slow disk, that would be great.
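The sanity check Amir refers to has a simple general shape: write a known pattern, issue the zeroing operation, and verify that the range reads back as zeroes (otherwise skip the test). The following is an illustrative, file-backed sketch only: a temp file stands in for the scratch device and a dd from /dev/zero stands in for discard; the real `_require_discard_zeroes` helper in patch #1 would operate on a block device (e.g. via blkdiscard) and `_notrun` on failure, so every name below is an assumption rather than the actual implementation.

```shell
# Illustrative sketch only: a temp file stands in for the scratch device
# and dd-from-/dev/zero stands in for the discard. A real fstests helper
# would discard a range of $SCRATCH_DEV and _notrun if the pattern
# survives; none of this is taken from the actual patch.
dev=$(mktemp)

# 1. Write a known non-zero pattern.
printf 'XXXXXXXXXXXXXXXX' > "$dev"

# 2. "Discard" the range (here: explicit zeroing via dd).
dd if=/dev/zero of="$dev" bs=16 count=1 conv=notrunc 2>/dev/null

# 3. Verify the range now reads back as zeroes.
head -c 16 /dev/zero > "$dev.zero"
if cmp -s "$dev" "$dev.zero"; then
    zero_ok=yes
    echo "zeroing works"
else
    zero_ok=no
    echo "zeroing does not work"   # an fstests helper would _notrun here
fi
rm -f "$dev" "$dev.zero"
```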
> >
> > I made these particular changes because it's how we previously
> > addressed generic/482, and they were a pretty clear and quick way to
> > address the problem. I'm pretty sure I've seen these tests reproduce
> > legitimate issues with or without thin, but that's from memory, so I
> > can't confirm it with certainty or suggest that it applies to every
> > regression these tests have discovered in the past.
> >
> > If we want to go down that path, I'd say let's just assume those no
> > longer reproduce. That doesn't make these tests any less broken on XFS
> > (v5, at least) today, so I guess I'd drop the thin changes (note again
> > that generic/482 is already using dm-thinp) and just disable these
> > tests on XFS until we can come up with an acceptable solution to make
> > them more reliable. That's slightly unfortunate because I think the
> > tests are quite effective, but we're continuing to see spurious
> > failures that are not trivial to diagnose.
> >
> > IIRC, I did at one point experiment with removing the (128M?) physical
> > zeroing limit from the associated logwrites tool, but that somewhat
> > predictably caused the test to take an extremely long time. I'm not
> > sure I even allowed it to finish, tbh. Anyways, perhaps some options
> > are to allow a larger physical zeroing limit and remove the tests from
> > the auto group, use smaller target devices to reduce the amount of
> > zeroing required, require the user to already have a thinp or some
> > such volume as a scratch dev, or otherwise find some other more
> > efficient zeroing mechanism.
> >
> > > I know it is not a simple request, but I just don't know how else to
> > > trust these changes. I guess a quick way for you to avoid the hassle
> > > is to add _require_discard_zeroes (patch #1) and drop the rest.
> >
> > That was going to be my fallback suggestion if the changes to the
> > tests were problematic for whatever reason.
> > Christoph points out that the discard zeroing logic is not predictable
> > in general the way it might be on dm-thinp, so I think for now we
> > should probably just notrun these on XFS. I'll send something like
> > that as a v2 of patch 1.
> >
>
> Maybe there is a better way to stay safe and not sorry.
> Can we use dm thinp only for replay but not for fsx?
> I suppose reliable zeroing is only needed in replay?
>

I think that would work, but it might clutter or complicate the tests by
using multiple devices for I/O vs. replay. That strikes me as overkill
given that two of these tests run multiple instances of fsx (which uses a
fixed-size file) and the other looks like it runs a simple xfs_io command
with a fixed amount of I/O. Ironically, generic/482 seems like the test
with the most I/O overhead (via fsstress), but it's been using dm-thin
for a while now.

I think if we're really that concerned with dm-thin allocation overhead,
we should either configure the cluster size to amortize its cost or look
into using smaller block devices so that replay-log falls back to manual
zeroing when discard doesn't work. A quick test of the latter doesn't
show a huge increase in test runtime for 455/457, but I'd also have to
confirm that this works as expected and eliminates the spurious
corruption issue. Of course, a tradeoff could be that if discard does
work but doesn't provide zeroing (which Christoph already explained is
unpredictable), then I think we're still susceptible to the original
problem. Perhaps we could create a mode that simply forces manual
zeroing over discards, if there isn't one already...

Brian

> Thanks,
> Amir.
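The forced manual-zeroing mode floated above would conceptually look like the following: instead of issuing a discard and hoping it zeroes, explicitly write zeroes over the target range. This is a rough, file-backed sketch under stated assumptions; the image size, offsets, and use of dd are made up for illustration, and the real fallback would live in the log-writes replay-log tool operating on the replay device.

```shell
# File-backed sketch of a forced manual-zeroing path: rather than trust
# discard, write zeroes over the range directly. The image file, sizes,
# and offsets below are illustrative, not taken from replay-log.
img=$(mktemp)

# An 8 KiB "device" image filled with a non-zero pattern.
yes A | tr -d '\n' | head -c 8192 > "$img"

# Manually zero a 4 KiB range at offset 2 KiB (bs=1024, seek=2).
dd if=/dev/zero of="$img" bs=1024 seek=2 count=4 conv=notrunc 2>/dev/null

# The target range must now read back as zeroes...
head -c 4096 /dev/zero > "$img.zero"
dd if="$img" of="$img.range" bs=1024 skip=2 count=4 2>/dev/null
cmp -s "$img.range" "$img.zero" && range_ok=yes || range_ok=no

# ...while data outside the range is untouched.
first=$(dd if="$img" bs=1 count=1 2>/dev/null)
[ "$first" = "A" ] && head_ok=yes || head_ok=no

echo "range zeroed: $range_ok, surrounding data intact: $head_ok"
rm -f "$img" "$img.zero" "$img.range"
```

Unlike discard, this path gives the deterministic zero-on-read behavior the crash-consistency tests depend on, at the cost of actually writing the zeroes, which is why the amount of physical zeroing (and the 128M limit mentioned above) matters for runtime.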