On Thu, Jul 18, 2024 at 09:02:02AM -0700, Darrick J. Wong wrote:
> On Thu, Jul 18, 2024 at 11:36:13AM -0400, Josef Bacik wrote:
> > On Thu, Jul 18, 2024 at 09:02:08AM -0400, Brian Foster wrote:
> > > Hi all,
> > > 
> > > This is a stab at fixing the iomap zero range problem where it doesn't
> > > correctly handle the case of an unwritten mapping with dirty pagecache.
> > > The gist is that we scan the mapping for dirty cache, zero any
> > > already-dirty folios via buffered writes as normal, but then otherwise
> > > skip clean ranges once we have a chance to validate those ranges against
> > > races with writeback or reclaim.
> > > 
> > > This is somewhat simplistic in terms of how it scans, but that is
> > > intentional based on the existing use cases for zero range. From poking
> > > around a bit, my current sense is that there isn't any user of zero
> > > range that would ever expect to see more than a single dirty folio. Most
> > > callers either straddle the EOF folio or flush in higher level code for
> > > presumably (fs) context specific reasons. If somebody has an example to
> > > the contrary, please let me know because I'd love to be able to use it
> > > for testing.
> > > 
> > > The caveat to this approach is that it only works for filesystems that
> > > implement folio_ops->iomap_valid(), which is currently just XFS. GFS2
> > > doesn't use ->iomap_valid() and does call zero range, but AFAICT it
> > > doesn't actually export unwritten mappings so I suspect this is not a
> > > problem. My understanding is that ext4 iomap support is in progress, but
> > > I've not yet dug into what that looks like (though I suspect similar to
> > > XFS). The concern is mainly that this leaves a landmine for fs that
> > > might grow support for unwritten mappings && zero range but not
> > > ->iomap_valid(). We'd likely never know zero range was broken for such
> > > fs until stale data exposure problems start to materialize.
> > > 
> > > I considered adding a fallback to just add a flush at the top of
> > > iomap_zero_range() so at least all future users would be correct, but I
> > > wanted to gate that on the absence of ->iomap_valid() and folio_ops
> > > isn't provided until iomap_begin() time. I suppose another way around
> > > that could be to add a flags param to iomap_zero_range() where the
> > > caller could explicitly opt out of a flush, but that's still kind of
> > > ugly. I dunno, maybe better than nothing..?
> 
> Or move ->iomap_valid to the iomap ops structure. It's a mapping
> predicate, and has nothing to do with folios.
> 

Good idea. That might be an option.

> > > So IMO, this raises the question of whether this is just unnecessarily
> > > overcomplicated. The KISS principle implies that it would also be
> > > perfectly fine to do a conditional "flush and stale" in zero range
> > > whenever we see the combination of an unwritten mapping and dirty
> > > pagecache (the latter checked before or during ->iomap_begin()). That's
> > > simple to implement and AFAICT would work/perform adequately and
> > > generically for all filesystems. I have one or two prototypes of this
> > > sort of thing if folks want to see it as an alternative.
> 
> I wouldn't mind seeing such a prototype. Start by hoisting the
> filemap_write_and_wait_range call to iomap, then adjust it only to do
> that if there's dirty pagecache + unwritten mappings? Then get more
> complicated from there, and we can decide if we want the increasing
> levels of trickiness.
> 

Yeah, exactly. Start with an unconditional flush at the top of
iomap_zero_range() (which perhaps also serves as a -stable fix), then
replace it with an unconditional dirty cache check and a conditional
flush/stale down in zero_iter() (for the dirty+unwritten case). With
that, false positives from the cache check are less of an issue because
the only consequence is basically just a spurious flush.
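For reference, the simple variant is more or less just the current
function body with the flush bolted on at the top. Rough, untested
sketch for illustration only (not one of the actual patches):

int
iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
                const struct iomap_ops *ops)
{
        struct iomap_iter iter = {
                .inode  = inode,
                .pos    = pos,
                .len    = len,
                .flags  = IOMAP_ZERO,
        };
        int ret;

        /*
         * Write back dirty pagecache over the target range up front so
         * that zeroing can never skip an unwritten mapping that still
         * has dirty (i.e. non-zero) data sitting in cache.
         */
        ret = filemap_write_and_wait_range(inode->i_mapping, pos,
                                           pos + len - 1);
        if (ret)
                return ret;

        while ((ret = iomap_iter(&iter, ops)) > 0)
                iter.processed = iomap_zero_iter(&iter, did_zero);
        return ret;
}

(The dirty cache check in the conditional version could be something as
simple as filemap_range_needs_writeback() or similar; the exact helper
is TBD.)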
From there, the revalidation approach could be an optional further
optimization to avoid the flush entirely, but we'll have to see if it's
worth the complexity. I have various experimental patches around that
pretty much do the conditional flush thing. I just have to form it into
a presentable series.

> > I think this is the better approach, otherwise there's another behavior that's
> > gated behind having a callback that other filesystems may not know about and
> > thus have a gap.
> 
> <nod> I think filesystems currently only need to supply an ->iomap_valid
> function for pagecache operations because those are the only ones where
> we have to maintain consistency between something that isn't locked when
> we get the mapping, and the mapping not being locked when we lock that
> first thing. I suspect they also only need to supply it if they support
> unwritten extents.
> 
> From what I can tell, the rest (e.g. directio/FIEMAP) don't care because
> callers get to manage concurrency.
> 
> *But* in general it makes sense to me that any iomap operation ought to
> be able to revalidate a mapping at any time.
> 
> > Additionally do you have a test for this stale data exposure? I think no matter
> > what the solution it would be good to have a test for this so that we can make
> > sure we're all doing the correct thing with zero range. Thanks,
> 
> I was also curious about this. IIRC we have some tests for the
> validity checking itself, but I don't recall if there's a specific
> regression test for the eofblock clearing.
> 

Err.. yeah. I have some random test sequences around that reproduce some
of these issues. I'll form them into an fstest to go along with this.

Thank you both for the feedback.

Brian

> --D
> 
> > Josef
> > 
> 