Re: [PATCH] iomap: Return zero in case of unsuccessful pagecache invalidation before DIO

On Wed, Jun 3, 2020 at 8:02 PM Darrick J. Wong <darrick.wong@xxxxxxxxxx> wrote:
>
> On Wed, Jun 03, 2020 at 12:32:15PM +0100, Filipe Manana wrote:
> > On Wed, Jun 3, 2020 at 12:23 PM Filipe Manana <fdmanana@xxxxxxxxx> wrote:
> > >
> > > On Mon, Jun 1, 2020 at 4:16 PM Goldwyn Rodrigues <rgoldwyn@xxxxxxx> wrote:
> > > >
> > > > On 17:23 28/05, Darrick J. Wong wrote:
> > > > > On Thu, May 28, 2020 at 02:21:03PM -0500, Goldwyn Rodrigues wrote:
> > > > > >
> > > > > > Filesystems such as btrfs are unable to guarantee page invalidation
> > > > > > because pages could be locked as a part of the extent. Return zero
> > > > >
> > > > > Locked for what?  filemap_write_and_wait_range should have just cleaned
> > > > > them off.
> > > > >
> > > > > > in case a page cache invalidation is unsuccessful so filesystems can
> > > > > > fallback to buffered I/O. This is similar to
> > > > > > generic_file_direct_write().
> > > > > >
> > > > > > This takes care of the following invalidation warning during btrfs
> > > > > > mixed buffered and direct I/O using iomap_dio_rw():
> > > > > >
> > > > > > Page cache invalidation failure on direct I/O.  Possible data
> > > > > > corruption due to collision with buffered I/O!
> > > > > >
> > > > > > Signed-off-by: Goldwyn Rodrigues <rgoldwyn@xxxxxxxx>
> > > > > >
> > > > > > diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> > > > > > index e4addfc58107..215315be6233 100644
> > > > > > --- a/fs/iomap/direct-io.c
> > > > > > +++ b/fs/iomap/direct-io.c
> > > > > > @@ -483,9 +483,15 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
> > > > > >      */
> > > > > >     ret = invalidate_inode_pages2_range(mapping,
> > > > > >                     pos >> PAGE_SHIFT, end >> PAGE_SHIFT);
> > > > > > -   if (ret)
> > > > > > -           dio_warn_stale_pagecache(iocb->ki_filp);
> > > > > > -   ret = 0;
> > > > > > +   /*
> > > > > > +    * If a page can not be invalidated, return 0 to fall back
> > > > > > +    * to buffered write.
> > > > > > +    */
> > > > > > +   if (ret) {
> > > > > > +           if (ret == -EBUSY)
> > > > > > +                   ret = 0;
> > > > > > +           goto out_free_dio;
> > > > >
> > > > > XFS doesn't fall back to buffered io when directio fails, which means
> > > > > this will cause a regression there.
> > > > >
> > > > > Granted mixing write types is bogus...
> > > > >
> > > >
> > > > I have not seen page invalidation failure errors on XFS, but what should
>
> What happens if you try to dirty an mmap page at the same time as
> issuing a directio?
>
> > > > happen hypothetically if they do occur? Carry on with the direct I/O?
> > > > Would an error return like -ENOTBLK be better?
>
> In the old days, we would only WARN when we encountered collisions like
> this.  How about only setting EIO in the mapping if we fail the
> pagecache invalidation in directio completion?  If a buffered write
> dirties the page after the direct write thread flushes the dirty pages
> but before invalidation, we can argue that we didn't lose anything; the
> direct write simply happened after the buffered write.
>
> XFS doesn't implement buffered write fallback, and it never has.  Either
> the entire directio succeeds, or it returns a negative error code.  Some
> of the iomap_dio_rw callers (ext4, gfs2) will notice a short direct
> write and try to finish the rest with buffered io, but xfs and zonefs do
> not.
>
> The net effect of this (on xfs anyway) is that when buffered and direct
> writes collide, before we'd make the buffered writer lose, now we make
> the direct writer lose.
>
> You also /could/ propose teaching xfs how to fall back to an
> invalidating synchronous buffered write like ext4 does, but that's not
> part of this patch set, and that's not a behavior I want to introduce
> suddenly during the merge window.
>
> > > It doesn't make much sense to me to emit the warning and then proceed
> > > with the direct IO write path anyway, as if nothing happened.
> > > If we are concerned about possible corruption, we should either return
> > > an error or fall back to buffered IO just like
> > > generic_file_direct_write() did, and not allow the possibility of
> > > corruption.
> > >
> > > Btw, this is causing a regression in Btrfs now. The problem is that
> > > dio_warn_stale_pagecache() sets an EIO error in the inode's mapping:
> > >
> > > errseq_set(&inode->i_mapping->wb_err, -EIO);
> > >
> > > So the next fsync on the file will return that error, even though the
> > > fsync itself completed without any errors.
> > >
> > > Since the patchset to make btrfs direct IO use iomap is already in
> > > Linus' tree, we need to fix this somehow.
>
> Y'all /just/ sent the pull request containing that conversion 2 days
> ago.  Why did you move forward with that when you knew there were
> unresolved fstests failures?
>
> > > This makes generic/547 fail often for example - buffered write against
> > > file + direct IO write + fsync - the latter returns -EIO.
> >
> > Just to make it clear, despite the -EIO error, there was actually no
> > data loss or corruption (generic/547 checks that),
> > since the direct IO write path in btrfs figures out there's a buffered
> > write still ongoing and waits for it to complete before proceeding
> > with the dio write.
> >
> > Nevertheless, it's still a regression: -EIO shouldn't be returned when
> > everything went fine.
>
> Now I'm annoyed because I feel like you're trying to strong-arm me into
> making last minute changes to iomap when you could have held off for
> another cycle.

If you are talking to me, I'm not trying to strong-arm anyone nor
point fingers.
I'm just reporting a problem that I found earlier today while testing
some work I was doing.

Thanks.

>
> While I'm pretty sure your analysis is correct that we could /probably/
> get away with only setting EIO if we can't invalidate the cache after
> we've already written the disk blocks directly (because then we really
> did lose something), I /really/ want these kinds of behavioral changes
> to shared code to soak in for-next for long enough that we can work out
> the kinks without exposing the upstream kernel to unnecessary risk.
>
> This conversation would be /much/ different had you not just merged the
> btrfs directio iomap conversion yesterday.
>
> --D
>
> >
> > >
> > > Thanks.
> > >
> > > >
> > > > --
> > > > Goldwyn



-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”



