Re: [Reproducer] Corruption, possible race between splice and FALLOC_FL_PUNCH_HOLE

On Tue, Jun 27, 2023 at 11:31:56AM +0100, Matthew Wilcox wrote:
> On Tue, Jun 27, 2023 at 03:47:57PM +1000, Dave Chinner wrote:
> > On Mon, Jun 26, 2023 at 09:12:52PM -0400, Matt Whitlock wrote:
> > > Hello, all. I am experiencing a data corruption issue on Linux 6.1.24 when
> > > calling fallocate with FALLOC_FL_PUNCH_HOLE to punch out pages that have
> > > just been spliced into a pipe. It appears that the fallocate call can zero
> > > out the pages that are sitting in the pipe buffer, before those pages are
> > > read from the pipe.
> > > 
> > > Simplified code excerpt (eliding error checking):
> > > 
> > > int fd = /* open file descriptor referring to some disk file */;
> > > for (off_t consumed = 0;;) {
> > >   ssize_t n = splice(fd, NULL, STDOUT_FILENO, NULL, SIZE_MAX, 0);
> > >   if (n <= 0) break;
> > >   consumed += n;
> > >   fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, consumed);
> > > }
> > 
> > Huh. Never seen that pattern before - what are you trying to
> > implement with this?
> > 
> > > Expected behavior:
> > > Punching holes in a file after splicing pages out of that file into a pipe
> > > should not corrupt the spliced-out pages in the pipe buffer.
> > 
> > splice is a nasty, tricky beast that should never have been
> > inflicted on the world...
> 
> Indeed.  I understand the problem, I just don't know if it's a bug.
> 
> > > Observed behavior:
> > > Some of the pages that have been spliced into the pipe get zeroed out by the
> > > subsequent fallocate call before they can be consumed from the read side of
> > > the pipe.
> > 
> > Which implies the splice is not copying the page cache pages but
> > simply taking a reference to them.
> 
> Yup.
> 
> > Hmmm. The corruption, more often than not, starts on a high-order
> > aligned file offset. Tracing indicates data is being populated in
> > the page cache by readahead, which would be using high-order folios
> > in XFS.
> > 
> > All the splice operations return byte counts that are 4kB
> > aligned, so the punch is doing filesystem block aligned punches. The
> > extent freeing traces indicate the filesystem is removing exactly
> > the right ranges from the file, and so the page cache invalidation
> > calls it is doing are also going to be for the correct ranges.
> > 
> > This smells of a partial high-order folio invalidation problem,
> > or at least a problem with splice working on pages rather than
> > folios, with the two not being properly coherent as a result of
> > partial folio invalidation.
> > 
> > To confirm, I removed all the mapping_set_large_folios() calls in
> > XFS, and the data corruption goes away. Hence, at minimum, large
> > folios look like a trigger for the problem.
> 
> If you do a PUNCH HOLE, documented behaviour is:
> 
>        Specifying the FALLOC_FL_PUNCH_HOLE flag (available since Linux 2.6.38)
>        in mode deallocates space (i.e., creates a  hole)  in  the  byte  range
>        starting  at offset and continuing for len bytes.  Within the specified
>        range, partial filesystem  blocks  are  zeroed,  and  whole  filesystem
>        blocks  are removed from the file.  After a successful call, subsequent
>        reads from this range will return zeros.
> 
> So we have, let's say, an order-4 folio and the user tries to PUNCH_HOLE
> page 3 of it.  We try to split it, but that fails because the pipe holds
> a reference.  The filesystem has removed the underlying data from the
> storage medium.  What is the page cache to do?  It must memset() so that
> subsequent reads return zeroes.  And now the page in the pipe has the
> hole punched into it.

Ok, that's what I suspected.

> I think you can reproduce this problem without large folios by using a
> 512-byte block size filesystem and punching holes that are sub page
> size.  The page cache must behave similarly.

Not on XFS. See xfs_flush_unmap_range(), which is run on fallocate
ranges before we do the operation:

	rounding = max_t(xfs_off_t, mp->m_sb.sb_blocksize, PAGE_SIZE);
	start = round_down(offset, rounding);
	end = round_up(offset + len, rounding) - 1;
	....
	truncate_pagecache_range(inode, start, end);

The invalidation range is rounded to the larger of PAGE_SIZE or the
filesystem block size, so that we invalidate, at minimum, entire
pages in the page cache. The block size case is there to do the
right thing when block size > page size.
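
To illustrate the rounding with concrete numbers (mine, not from the
trace): a punch at offset 1536 for 1024 bytes on a 512 byte block size
filesystem with 4kB pages still invalidates the entire containing page.
A small userspace sketch, with round_down/round_up standing in for the
kernel's power-of-two rounding macros:

	#include <stdio.h>

	/* userspace stand-ins for the kernel rounding macros */
	#define round_down(x, y)	((x) & ~((y) - 1))
	#define round_up(x, y)		((((x) - 1) | ((y) - 1)) + 1)

	int main(void)
	{
		long long offset = 1536, len = 1024;
		long long rounding = 4096;	/* max(block size, PAGE_SIZE) */
		long long start = round_down(offset, rounding);
		long long end = round_up(offset + len, rounding) - 1;

		printf("invalidate %lld-%lld\n", start, end);	/* 0-4095 */
		return 0;
	}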

Hence XFS will not do sub-page invalidations and so avoids touching
the contents of the page in this case. However, with large folios,
we cannot invalidate entire objects in the page cache like this any
more, so invalidation touches the page contents and that shows up in
the pages that are held in the pipe...
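
For reference, a standalone version of Matt's excerpt above with the
error checking filled back in (a sketch; the build and run commands are
illustrative, not from the report - stdout must be a pipe for splice()
to work):

	/* repro.c: splice a file to stdout, punching out consumed ranges.
	 * Build: cc -O2 -o repro repro.c
	 * Run:   cp data data.orig; ./repro data | cmp - data.orig
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		if (argc != 2) {
			fprintf(stderr, "usage: %s <file>\n", argv[0]);
			return 1;
		}

		int fd = open(argv[1], O_RDWR);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		for (off_t consumed = 0;;) {
			/* moves page cache pages into the pipe by reference */
			ssize_t n = splice(fd, NULL, STDOUT_FILENO, NULL,
					   SIZE_MAX, 0);
			if (n < 0) {
				perror("splice");
				return 1;
			}
			if (n == 0)
				break;
			consumed += n;

			/* punch out everything consumed so far; this is what
			 * zeroes pages still sitting in the pipe buffer */
			if (fallocate(fd, FALLOC_FL_PUNCH_HOLE |
					  FALLOC_FL_KEEP_SIZE, 0, consumed) < 0) {
				perror("fallocate");
				return 1;
			}
		}
		return 0;
	}

Any mismatch cmp reports is the corruption described above: pages still
referenced by the pipe were zeroed by the punch before being read out.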

> Perhaps the problem is that splice() appears to copy, but really just
> takes the reference.  Perhaps splice needs to actually copy if it
> sees a multi-page folio and isn't going to take all of it.  I'm not
> an expert in splice-ology, so let's cc some people who know more about
> splice than I do.

Yup, that's pretty much my conclusion - if the destination is
page-based, we copy the data. If the destination is a pipe, we simply
take references to the source pages instead of copying the data -
see my followup email.
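
For comparison, a read()/write() version of the same loop (a sketch,
with error checking elided like the original excerpt) should not show
the problem, because read() copies the data out of the page cache into
a user buffer - and write() copies it again into the pipe - before the
punch runs:

	char buf[64 * 1024];
	for (off_t consumed = 0;;) {
		ssize_t n = read(fd, buf, sizeof(buf));
		if (n <= 0)
			break;
		if (write(STDOUT_FILENO, buf, n) != n)
			break;
		consumed += n;
		fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			  0, consumed);
	}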

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


