Re: [PATCH 3/3] xfs, iomap: ->discard_folio() is broken so remove it

On Tue, Feb 14, 2023 at 01:10:05PM -0500, Brian Foster wrote:
> On Tue, Feb 14, 2023 at 04:51:14PM +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > Ever since commit e9c3a8e820ed ("iomap: don't invalidate folios
> > after writeback errors") XFS and iomap have been retaining dirty
> > folios in memory after a writeback error. XFS no longer invalidates
> > the folio, and iomap no longer clears the folio uptodate state.
> > 
> > However, iomap is still calling ->discard_folio on error, and
> > XFS is still punching the delayed allocation range backing the dirty
> > folio.
> > 
> > This is incorrect behaviour. The folio remains dirty and up to date,
> > meaning that another writeback will be attempted in the near future.
> > This means that XFS is still going to have to allocate space for it
> > during writeback, and that means it still needs to have a delayed
> > allocation reservation and extent backing the dirty folio.
> > 
> 
> Hmm.. I don't think that is correct. It looks like the previous patch
> removes the invalidation, but writeback clears the dirty bit before
> calling into the fs and we're not doing anything to redirty the folio,
> so there's no guarantee of subsequent writeback.

Ah, right, I got confused with iomap_do_writepage(), which redirties
folios it performs no action on. The case being tripped here is
"count == 0", which means no action has actually been taken on the
folio and it is not submitted for writeback. We don't mark the folio
with an error on submission failure like we do for errors reported
at IO completion, so the folio is just left in its current state
in the cache.
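
Roughly, the tail of iomap_writepage_map() does something like this
(paraphrased, not the exact upstream code):

	if (unlikely(error)) {
		/*
		 * Mapping failed: let the fs discard the backing store
		 * for the range that never made it into an ioend.
		 */
		if (wpc->ops->discard_folio)
			wpc->ops->discard_folio(folio, pos);

		/*
		 * count == 0: nothing was submitted, so just unlock.
		 * The folio is not redirtied and no error is set on it,
		 * so it sits clean (and still uptodate) in the cache.
		 */
		if (!count) {
			folio_unlock(folio);
			goto done;
		}
	}
	folio_start_writeback(folio);
	folio_unlock(folio);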

> Regardless, I can see how this prevents this sort of error in the
> scenario where writeback fails due to corruption, but I don't see how it
> doesn't just break error handling of writeback failures not associated
> with corruption.

What other cases in XFS do we have that cause mapping failure? We
can't get ENOSPC here because of delalloc reservations. We can't get
ENOMEM because all the memory allocations are blocking. That just
leaves IO errors reading metadata, or structure corruption when
parsing and modifying on-disk metadata.  I can't think (off the top
of my head) of any other type of error we can get returned from
allocation - what sort of non-corruption errors were you thinking
of here?

> fails due to some random/transient error, delalloc is left around on a
> !dirty page (i.e. stale), and reclaim eventually comes around and
> results in the usual block accounting corruption associated with stale
> delalloc blocks.

The first patches in the series fix those issues. If we get stray
delalloc extents on a healthy inode, then it will still trigger all
the warnings/asserts that we have now. But if the inode has been
marked sick by a corruption-based allocation failure, it will clean
up in reclaim without leaking anything or throwing any new warnings.
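
To illustrate the intent, the reclaim side behaves roughly like the
sketch below. This is just a paraphrase of the behaviour described
above, not the actual patch; xfs_inode_is_healthy() and the punch
range here are my assumptions for illustration:

	if (ip->i_delayed_blks) {
		if (xfs_inode_is_healthy(ip)) {
			/* healthy inodes keep the existing warnings */
			ASSERT(0);
		} else {
			/* sick inode: quietly punch the stale delalloc */
			xfs_bmap_punch_delalloc_range(ip, 0,
					XFS_ISIZE(ip));
		}
	}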

> This is easy enough to test/reproduce (just tried it
> via error injection to delalloc conversion) that I'm kind of surprised
> fstests doesn't uncover it. :/

> > Failure to retain the delalloc extent (because xfs_discard_folio()
> > punched it out) means that the next writeback attempt does not find
> > an extent over the range of the write in ->map_blocks(), and
> > xfs_map_blocks() triggers a WARN_ON() because it should never land
> > in a hole for a data fork writeback request. This looks like:
> > 
> 
> I'm not sure this warning makes a lot of sense either given most of this
> should occur around the folio lock. Looking back at the code and the
> error report for this, the same error injection used above on a 5k write
> to a bsize=1k fs actually shows the punch remove fsb offsets 0-5 on a
> writeback failure, so it does appear to be punching too much out.  The
> cause appears to be that the end offset is calculated in
> xfs_discard_folio() by rounding up the start offset to 4k (folio size).
> If pos == 0, this results in passing end_fsb == 0 to the punch code,
> which xfs_iext_lookup_extent_before() then changes to fsb == 5 because
> that's the last block of the delalloc extent that covers fsb 0.

And that is the bug I could not see in commit 7348b322332d ("xfs:
xfs_bmap_punch_delalloc_range() should take a byte range"), which is
what this warning was bisected down to. Thank you for identifying
the reason the bisect landed on that commit. Have you written a
fix to test out your reasoning that you can post?
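
If not, I'm guessing it's just a matter of ending the punch at the
end of the folio rather than rounding up pos - something like this
completely untested sketch in xfs_discard_folio():

	/*
	 * Untested: punch to the end of the folio rather than
	 * round_up(pos, folio_size()), so pos == 0 no longer turns
	 * into a zero-length range for the punch.
	 */
	error = xfs_bmap_punch_delalloc_range(ip, pos,
			folio_pos(folio) + folio_size(folio));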

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


