On Thu, Sep 29, 2022 at 11:44:49AM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> Every now and then I see this crash on arm64:

....

> This crash is a result of iomap_writepage_map encountering some sort of
> error during writeback and wanting to set that error code in the file
> mapping so that fsync will report it.  Unfortunately, the code
> dereferences folio->mapping after unlocking the folio, which means that
> another thread could have removed the page from the page cache
> (writeback doesn't hold the invalidation lock) and given it to somebody
> else.
> 
> At best we crash the system like above; at worst, we corrupt memory or
> set an error on some other unsuspecting file while failing to record
> the problems with *this* file.  Regardless, fix the problem by
> reporting the error to the inode mapping.
> 
> NOTE: Commit 598ecfbaa742 lifted the XFS writeback code to iomap, so
> this fix should be backported to XFS in the 4.6-5.4 kernels in addition
> to iomap in the 5.5-5.19 kernels.
> 
> Fixes: e735c0079465 ("iomap: Convert iomap_add_to_ioend() to take a folio")
> Probably-Fixes: 598ecfbaa742 ("iomap: lift the xfs writeback code to iomap")
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
>  fs/iomap/buffered-io.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index ca5c62901541..77d59c159248 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1421,7 +1421,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>  	if (!count)
>  		folio_end_writeback(folio);
>  done:
> -	mapping_set_error(folio->mapping, error);
> +	mapping_set_error(inode->i_mapping, error);
>  	return error;

Looks good, though it might be worth putting a comment somewhere in this
code to indicate that it's not safe to trust the folio contents after
we've submitted the bio, unlocked it without setting writeback, or ended
writeback on an unlocked folio...

Regardless,

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

-- 
Dave Chinner
david@xxxxxxxxxxxxx
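
For illustration, a comment along the lines Dave suggests might sit just
above the mapping_set_error() call at the end of iomap_writepage_map().
This is only a sketch: the comment wording is hypothetical, and the
surrounding lines are those from the hunk quoted above:

	if (!count)
		folio_end_writeback(folio);
done:
	/*
	 * Do not touch the folio here: once the bio has been submitted,
	 * once the folio has been unlocked without setting writeback,
	 * or once writeback has ended, reclaim or invalidation can
	 * remove it from the page cache and hand it to someone else,
	 * so folio->mapping (like the rest of the folio) is no longer
	 * ours to read.  Report errors via the inode mapping instead.
	 */
	mapping_set_error(inode->i_mapping, error);
	return error;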