Re: [RFC v2 0/3] Turn iomap_page_ops into iomap_folio_ops

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Fri, Dec 02, 2022 at 02:54:00AM +0100, Andreas Gruenbacher wrote:
> On Thu, Dec 1, 2022 at 10:30 PM Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Thu, Dec 01, 2022 at 07:09:54PM +0100, Andreas Gruenbacher wrote:
> > > Hi again,
> > >
> > > [Same thing, but with the patches split correctly this time.]
> > >
> > > we're seeing a race between journaled data writes and the shrinker on
> > > gfs2.  What's happening is that gfs2_iomap_page_done() is called after
> > > the page has been unlocked, so try_to_free_buffers() can come in and
> > > free the buffers while gfs2_iomap_page_done() is trying to add them to
> > > the transaction.  Not good.
> > >
> > > This is a proposal to change iomap_page_ops so that page_prepare()
> > > prepares the write and grabs the locked page, and page_done() unlocks
> > > and puts that page again.  While at it, this also converts the hooks
> > > from pages to folios.
> > >
> > > To move the pagecache_isize_extended() call in iomap_write_end() out of
> > > the way, a new folio_may_straddle_isize() helper is introduced that
> > > takes a locked folio.  That is then used when the inode size is updated,
> > > before the folio is unlocked.
> > >
> > > I've also converted the other applicable folio_may_straddle_isize()
> > > users, namely generic_write_end(), ext4_write_end(), and
> > > ext4_journalled_write_end().
> > >
> > > Any thoughts?
> >
> > I doubt that moving page cache operations from the iomap core to
> > filesystem specific callouts will be acceptible. I recently proposed
> > patches that added page cache walking to an XFS iomap callout to fix
> > a data corruption, but they were NAKd on the basis that iomap is
> > supposed to completely abstract away the folio and page cache
> > manipulations from the filesystem.
> 
> Right. The resulting code is really quite disgusting, for a
> fundamentalist dream of abstraction.
> 
> > This patchset seems to be doing the same thing - moving page cache
> > and folio management directly in filesystem specific callouts. Hence
> > I'm going to assume that the same architectural demarcation is
> > going to apply here, too...
> >
> > FYI, there is already significant change committed to the iomap
> > write path in the current XFS tree as a result of the changes I
> > mention - there is stale IOMAP detection which adds a new page ops
> > method and adds new error paths with a locked folio in
> > iomap_write_begin().
> 
> That would have belonged on the iomap-for-next branch rather than in
> the middle of a bunch of xfs commits.

Damned if you do, damned if you don't.

There were non-trivial cross dependencies between XFS and iomap in
that patch set.  The initial IOMAP_F_STALE infrastructure needed XFS
changes first, otherwise it could deadlock at ENOSPC on write page
faults. i.e. the iomap change in isolation broke stuff, so we're
forced to either carry XFs changes in iomap or iomap changes in XFS
so that there are no regressions in a given tree.

Then we had to move XFS functionality to iomap to fix another data
corruption that the IOMAP_F_STALE infrastructure exposed in XFS via
generic/346. Once the code was moved, then we could build it up into
the page cache scanning functionality in iomap. And only then could
we add the XFS IOMAP_F_STALE validation to XFS to solve the original
data corruption that started all this off.

IOWs, there were so many cross dependencies between XFs and iomap
that it was largely impossible to break it up into two separate sets
of indpendent patches that didn't cause regressions in one or the
other tree. And in the end, we'd still have to merge the iomap tree
into XFS or vice versa to actually test that the data corruption fix
worked.

In situations like this, we commonly take the entire series into one
of the two trees rather than make a whole lot more work for
ourselves by trying to separate them out. And in this case, because
it was XFS data corruption and race conditions that needed fixing,
it made sense to take it through the XFS tree so that it gets
coverage from all the XFS testing that happens - the iomap tree gets
a lot less early coverage than the XFS tree...

> > And this other data corruption (and performance) fix for handling
> > zeroing over unwritten extents properly:
> >
> > https://lore.kernel.org/linux-xfs/20221201005214.3836105-1-david@xxxxxxxxxxxxx/
> >
> > changes the way folios are looked up and instantiated in the page
> > cache in iomap_write_begin(). It also adds new error conditions that
> > need to be returned to callers so to implement conditional "folio
> > must be present and dirty" page cache zeroing from
> > iomap_zero_iter(). Those semantics would also have to be supported
> > by gfs2, and that greatly complicates modifying and testing iomap
> > core changes.
> >
> > To avoid all this, can we simple move the ->page_done() callout in
> > the error path and iomap_write_end() to before we unlock the folio?
> > You've already done that for pagecache_isize_extended(), and I can't
> > see anything obvious in the gfs2 ->page_done callout that
> > would cause issues if it is called with a locked dirty folio...
> 
> Yes, I guess we can do that once pagecache_isize_extended() is
> replaced by folio_may_straddle_isize().
> 
> Can people please scrutinize the math in folio_may_straddle_isize() in
> particular?

I'll look at it more closely in the next couple of days.

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx



[Index of Archives]     [Linux Ext4 Filesystem]     [Union Filesystem]     [Filesystem Testing]     [Ceph Users]     [Ecryptfs]     [NTFS 3]     [AutoFS]     [Kernel Newbies]     [Share Photos]     [Security]     [Netfilter]     [Bugtraq]     [Yosemite News]     [MIPS Linux]     [ARM Linux]     [Linux Security]     [Linux Cachefs]     [Reiser Filesystem]     [Linux RAID]     [NTFS 3]     [Samba]     [Device Mapper]     [CEPH Development]

  Powered by Linux