Re: [PATCH 5/9] xfs: buffered write failure should not truncate the page cache

On Tue, Nov 15, 2022 at 04:48:58PM -0800, Darrick J. Wong wrote:
> On Tue, Nov 15, 2022 at 12:30:39PM +1100, Dave Chinner wrote:
> > +/*
> > + * Scan the data range passed to us for dirty page cache folios. If we find a
> > + * dirty folio, punch out the preceding range and update the offset from
> > + * which the next punch will start.
> > + *
> > + * We can punch out clean pages because they either contain data that has been
> > + * written back - in which case the delalloc punch over that range is a no-op -
> > + * or they have been read faulted in, in which case they contain zeroes and
> > + * we can remove the delalloc backing range and any new writes to those
> > + * pages will do the normal hole filling operation...
> > + *
> > + * This makes the logic simple: we only need to keep delalloc extents over
> > + * the dirty ranges of the page cache.
> > + */
> > +static int
> > +xfs_buffered_write_delalloc_scan(
> > +	struct inode		*inode,
> > +	loff_t			*punch_start_byte,
> > +	loff_t			start_byte,
> > +	loff_t			end_byte)
> > +{
> > +	loff_t			offset = start_byte;
> > +
> > +	while (offset < end_byte) {
> > +		struct folio	*folio;
> > +
> > +		/* grab locked page */
> > +		folio = filemap_lock_folio(inode->i_mapping, offset >> PAGE_SHIFT);
> > +		if (!folio) {
> > +			offset = ALIGN_DOWN(offset, PAGE_SIZE) + PAGE_SIZE;
> > +			continue;
> > +		}
> > +
> > +		/* if dirty, punch up to offset */
> > +		if (folio_test_dirty(folio)) {
> > +			if (offset > *punch_start_byte) {
> > +				int	error;
> > +
> > +				error = xfs_buffered_write_delalloc_punch(inode,
> > +						*punch_start_byte, offset);
> 
> This sounds an awful lot like what iomap_writeback_ops.discard_folio()
> does, albeit without the xfs_alert screaming everywhere.

Similar, but .discard_folio() trashes uncommitted written data, whilst
this loop explicitly preserves uncommitted written data....

> Moving along... so we punch out delalloc reservations for any part of
> the page cache that is clean.  "punch_start_byte" is the start pos of
> the last range of clean cache, and we want to punch up to the start of
> this dirty folio...
> 
> > +				if (error) {
> > +					folio_unlock(folio);
> > +					folio_put(folio);
> > +					return error;
> > +				}
> > +			}
> > +
> > +			/*
> > +			 * Make sure the next punch start is correctly bound to
> > +			 * the end of this data range, not the end of the folio.
> > +			 */
> > +			*punch_start_byte = min_t(loff_t, end_byte,
> > +					folio_next_index(folio) << PAGE_SHIFT);
> 
> ...and then start a new clean range after this folio (or at the end_byte
> to signal that we're done)...

Yes.
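
To illustrate the punch_start_byte accounting, here's a toy userspace
model of the scan (made-up names, fixed 4kB folios, no locking - not
the kernel code). The punch() stub just prints the ranges the real
code would hand to xfs_buffered_write_delalloc_punch(); dirty folios
keep their delalloc backing, clean and missing folios get punched:

#include <stdio.h>

#define FOLIO_SIZE      4096L
#define NR_FOLIOS       8

enum folio_state { F_MISSING, F_CLEAN, F_DIRTY };

/* Stand-in for the real delalloc punch - just report the range. */
static void punch(long start, long end)
{
        printf("punch delalloc over [%ld, %ld)\n", start, end);
}

/*
 * Walk [start, end) folio by folio. Clean and missing folios are left
 * for a later punch; a dirty folio punches everything accumulated
 * before it and moves punch_start past itself, so its uncommitted
 * data keeps its delalloc backing.
 */
static void scan(const enum folio_state *cache, long start, long end)
{
        long punch_start = start;
        long offset = start;

        while (offset < end) {
                long idx = offset / FOLIO_SIZE;

                if (cache[idx] == F_DIRTY) {
                        if (offset > punch_start)
                                punch(punch_start, offset);
                        /* next punch starts after this dirty folio */
                        punch_start = (idx + 1) * FOLIO_SIZE;
                        if (punch_start > end)
                                punch_start = end;
                }
                offset = (idx + 1) * FOLIO_SIZE;
        }

        /* trailing clean range - done by the caller in the real code */
        if (punch_start < end)
                punch(punch_start, end);
}

int main(void)
{
        /* folios 2 and 5 hold uncommitted dirty data */
        enum folio_state cache[NR_FOLIOS] = {
                F_CLEAN, F_CLEAN, F_DIRTY, F_CLEAN,
                F_MISSING, F_DIRTY, F_CLEAN, F_CLEAN,
        };

        scan(cache, 0, NR_FOLIOS * FOLIO_SIZE);
        return 0;
}

Running that punches [0, 8192), [12288, 20480) and [24576, 32768),
leaving the reservations under the two dirty folios intact.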

> > +	filemap_invalidate_lock(inode->i_mapping);
> > +	while (start_byte < end_byte) {
> > +		loff_t		data_end;
> > +
> > +		start_byte = mapping_seek_hole_data(inode->i_mapping,
> > +				start_byte, end_byte, SEEK_DATA);
> > +		/*
> > +		 * If there is no more data to scan, all that is left is to
> > +		 * punch out the remaining range.
> > +		 */
> > +		if (start_byte == -ENXIO || start_byte == end_byte)
> > +			break;
> > +		if (start_byte < 0) {
> > +			error = start_byte;
> > +			goto out_unlock;
> > +		}
> > +		ASSERT(start_byte >= punch_start_byte);
> > +		ASSERT(start_byte < end_byte);
> > +
> > +		/*
> > +		 * We find the end of this contiguous cached data range by
> > +		 * seeking from start_byte to the beginning of the next hole.
> > +		 */
> > +		data_end = mapping_seek_hole_data(inode->i_mapping, start_byte,
> > +				end_byte, SEEK_HOLE);
> > +		if (data_end < 0) {
> > +			error = data_end;
> > +			goto out_unlock;
> > +		}
> > +		ASSERT(data_end > start_byte);
> > +		ASSERT(data_end <= end_byte);
> > +
> > +		error = xfs_buffered_write_delalloc_scan(inode,
> > +				&punch_start_byte, start_byte, data_end);
> 
> ...and we use SEEK_HOLE/SEEK_DATA to find the ranges of pagecache where
> there are even going to be folios mapped.  But in structuring the code
> like this, xfs now has to know details about folio state again, and the
> point of iomap/buffered-io.c is to delegate handling of the pagecache
> away from XFS, at least for filesystems that want to manage buffered IO
> the same way XFS does.
> 
> IOWs, I agree with Christoph that these two functions that compute the
> ranges that need delalloc-punching really ought to be in the iomap code.

Which I've already done.

> TBH I wonder if this isn't better suited for being called by
> iomap_write_iter after a short write on an IOMAP_F_NEW iomap, with a
> function pointer in iomap page_ops for iomap to tell xfs to drop the
> delalloc reservations.

IIRC, the whole point of the iomap_begin()/iomap_end() operation
pairs is that the iter functions don't need to concern themselves
with the filesystem operations needed to manipulate extents and
clean up filesystem state as a result of failed operations.

I'm pretty sure we need this same error handling for zeroing and
unshare operations, because they could also detect stale cached
iomaps and have to go remap their extents. Maybe they don't allocate
new extents, but that is beside the point - the error handling that
is necessary is common to multiple functions, and right now they
don't have to care about cleaning up iomap/filesystem state when
short writes/errors occur.

Hence I don't think we want to break the architectural layering by
doing this - I think it would just lead to having to handle the same
issues in multiple places, and we don't gain any extra control over
or information about how to perform the cleanup of the unused
portion of the iomap....
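
To make that layering concrete, here's a toy userspace model of the
begin/iter/end structure (made-up names and signatures, not the real
iomap interfaces). The iterator only reports how much it managed to
copy back through ->map_end; everything about cleaning up the unused
reservation stays on the filesystem side of the boundary:

#include <stdio.h>

/* Toy mapping: newly_allocated stands in for IOMAP_F_NEW. */
struct toy_map {
        long    pos;
        long    len;
        int     newly_allocated;
};

struct toy_ops {
        int (*map_begin)(long pos, long len, struct toy_map *map);
        int (*map_end)(long pos, long len, long written,
                        struct toy_map *map);
};

/* "Filesystem" side: make the reservation, clean up after short writes. */
static int fs_map_begin(long pos, long len, struct toy_map *map)
{
        map->pos = pos;
        map->len = len;
        map->newly_allocated = 1;
        printf("fs: delalloc reservation over [%ld, %ld)\n", pos, pos + len);
        return 0;
}

static int fs_map_end(long pos, long len, long written, struct toy_map *map)
{
        /* Cleanup of the unused reservation lives entirely here. */
        if (map->newly_allocated && written < len)
                printf("fs: punch unused delalloc over [%ld, %ld)\n",
                                pos + written, pos + len);
        return 0;
}

/* "iomap" side: the iterator never touches filesystem state directly. */
static long toy_buffered_write(const struct toy_ops *ops, long pos,
                long len, long copyable)
{
        struct toy_map map;
        long written;

        if (ops->map_begin(pos, len, &map))
                return -1;

        /* pretend the copy-in stopped short, e.g. a fault on the source */
        written = copyable < len ? copyable : len;
        printf("iomap: copied %ld of %ld bytes\n", written, len);

        ops->map_end(pos, len, written, &map);
        return written;
}

int main(void)
{
        const struct toy_ops ops = { fs_map_begin, fs_map_end };

        toy_buffered_write(&ops, 0, 65536, 4096);
        return 0;
}

Whether it's a write, zeroing or unshare driving the loop, the core
iterator doesn't need to know anything about how the filesystem cleans
up the unused part of the mapping - it just has to pass the short
write/error information back through the end callback.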

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


