Re: [PATCH 1/1] xfs: use reflink to assist unaligned copy_file_range calls

On Sun, Dec 06, 2020 at 03:24:54PM -0800, Darrick J. Wong wrote:
> On Tue, Dec 01, 2020 at 10:25:48AM -0500, Brian Foster wrote:
> > On Mon, Nov 30, 2020 at 07:37:16PM -0800, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > > 
> > > Add a copy_file_range handler to XFS so that we can accelerate file
> > > copies with reflink when the source and destination ranges are not
> > > block-aligned.  We'll use the generic pagecache copy to handle the
> > > unaligned edges and attempt to reflink the middle.
> > > 
> > > Signed-off-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > > ---
> > >  fs/xfs/xfs_file.c |   99 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 99 insertions(+)
> > > 
> > > 
> > > diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> > > index 5b0f93f73837..9d1bb0dc30e2 100644
> > > --- a/fs/xfs/xfs_file.c
> > > +++ b/fs/xfs/xfs_file.c
> > > @@ -1119,6 +1119,104 @@ xfs_file_remap_range(
> > >  	return remapped > 0 ? remapped : ret;
> > >  }
> > >  
> > ...
> > > +STATIC ssize_t
> > > +xfs_file_copy_range(
> > > +	struct file		*src_file,
> > > +	loff_t			src_off,
> > > +	struct file		*dst_file,
> > > +	loff_t			dst_off,
> > > +	size_t			len,
> > > +	unsigned int		flags)
> > > +{
> > > +	struct inode		*inode_src = file_inode(src_file);
> > > +	struct xfs_inode	*src = XFS_I(inode_src);
> > > +	struct inode		*inode_dst = file_inode(dst_file);
> > > +	struct xfs_inode	*dst = XFS_I(inode_dst);
> > > +	struct xfs_mount	*mp = src->i_mount;
> > > +	loff_t			copy_ret;
> > > +	loff_t			next_block;
> > > +	size_t			copy_len;
> > > +	ssize_t			total_copied = 0;
> > > +
> > > +	/* Bypass all this if no copy acceleration is possible. */
> > > +	if (!xfs_want_reflink_copy_range(src, src_off, dst, dst_off, len))
> > > +		goto use_generic;
> > > +
> > > +	/* Use the regular copy until we're block aligned at the start. */
> > > +	next_block = round_up(src_off + 1, mp->m_sb.sb_blocksize);
> > 
> > Why the +1? AFAICT this means we manually copy the first block if
> > src_off does happen to be block aligned. Is this an assumption based on
> > the caller attempting ->remap_file_range() first?
> 
> Yes.  The VFS always tries that first.
> 
> > BTW, if we do happen to be called in some (theoretical) corner case
> > where remap doesn't work for reasons unrelated to alignment, it seems
> > this would unconditionally break the manual copy into multiple parts
> > (first block + the rest). It's not immediately clear to me whether
> > that's significant from a performance perspective,
> 
> I doubt it, since that's usually just copying around the pagecache.
> 

Ok, comment please.
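
Something along these lines in the code would do it, I suppose
(illustrative numbers only, assuming 4k blocks; not text from the patch):

	/*
	 * The VFS always tries ->remap_file_range() before falling back to
	 * ->copy_file_range(), so if src_off is still block aligned when we
	 * get here, the reflink must have failed for some other reason.
	 * The +1 pushes an aligned src_off a whole block forward:
	 *
	 *   src_off = 8192 (aligned):   round_up(8193, 4096) = 12288,
	 *                               so up to a full block goes through
	 *                               the pagecache copy
	 *   src_off = 8200 (unaligned): round_up(8201, 4096) = 12288,
	 *                               so up to 4088 bytes are copied to
	 *                               reach the next block boundary
	 */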

> > but I wonder if it would be nicer here to
> > filter that out more explicitly. For example, run the remap checks on
> > the block-aligned offset/len first, or skip the remap if the caller has
> > provided a block-aligned start (i.e. hinting that remap failed for other
> > reasons),
> 
> Yes, checking the block alignment is a good suggestion.  Will fix.
> 
> > or perhaps even implement this so it conditionally performs a
> > short manual copy so the next retry would fall into ->remap_file_range()
> > with aligned offsets, etc.
> 
> Hm.  That could be a thing too, though my opinion is that we should make
> as much progress as we can before exiting the kernel.
> 

Yeah, the more I thought about this the more it seemed like a hack and
not really sane for a production system.
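
The explicit alignment check seems like the cleaner route to me.  Roughly
something like this, perhaps (completely untested, just to illustrate the
idea, reusing the names from the patch):

	/*
	 * The VFS already tried ->remap_file_range() before falling back to
	 * ->copy_file_range().  If both offsets are already block aligned,
	 * the reflink must have failed for some reason other than alignment,
	 * so don't bother retrying it here.
	 */
	if (IS_ALIGNED(src_off, mp->m_sb.sb_blocksize) &&
	    IS_ALIGNED(dst_off, mp->m_sb.sb_blocksize))
		goto use_generic;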

Brian

> --D
> 
> > Thoughts?
> > 
> > > +	copy_len = min_t(size_t, len, next_block - src_off);
> > > +	if (copy_len > 0) {
> > > +		copy_ret = generic_copy_file_range(src_file, src_off, dst_file,
> > > +					dst_off, copy_len, flags);
> > > +		if (copy_ret < 0)
> > > +			return copy_ret;
> > > +
> > > +		src_off += copy_ret;
> > > +		dst_off += copy_ret;
> > > +		len -= copy_ret;
> > > +		total_copied += copy_ret;
> > > +		if (copy_ret < copy_len || len == 0)
> > > +			return total_copied;
> > > +	}
> > > +
> > > +	/*
> > > +	 * Now try to reflink as many full blocks as we can.  If the end of the
> > > +	 * copy request wasn't block-aligned or the reflink fails, we'll just
> > > +	 * fall into the generic copy to do the rest.
> > > +	 */
> > > +	copy_len = round_down(len, mp->m_sb.sb_blocksize);
> > > +	if (copy_len > 0) {
> > > +		copy_ret = xfs_file_remap_range(src_file, src_off, dst_file,
> > > +				dst_off, copy_len, REMAP_FILE_CAN_SHORTEN);
> > > +		if (copy_ret >= 0) {
> > > +			src_off += copy_ret;
> > > +			dst_off += copy_ret;
> > > +			len -= copy_ret;
> > > +			total_copied += copy_ret;
> > > +			if (copy_ret < copy_len || len == 0)
> > > +				return total_copied;
> > 
> > Any reason we return a potential short copy here, but fall into the
> > manual copy if the reflink outright fails?
> > 
> > > +		}
> > > +	}
> > > +
> > > +use_generic:
> > > +	/* Use the regular copy to deal with leftover bytes. */
> > > +	copy_ret = generic_copy_file_range(src_file, src_off, dst_file,
> > > +			dst_off, len, flags);
> > > +	if (copy_ret < 0)
> > > +		return copy_ret;
> > 
> > Perhaps this should also check/return total_copied in the event we've
> > already done some work..?
> > 
> > Brian
> > 
> > > +	return total_copied + copy_ret;
> > > +}
> > > +
> > >  STATIC int
> > >  xfs_file_open(
> > >  	struct inode	*inode,
> > > @@ -1381,6 +1479,7 @@ const struct file_operations xfs_file_operations = {
> > >  	.get_unmapped_area = thp_get_unmapped_area,
> > >  	.fallocate	= xfs_file_fallocate,
> > >  	.fadvise	= xfs_file_fadvise,
> > > +	.copy_file_range = xfs_file_copy_range,
> > >  	.remap_file_range = xfs_file_remap_range,
> > >  };
> > >  
> > > 
> > 
> 
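
On the total_copied question above: the final fallback could hand back
whatever progress was already made rather than the error code, e.g.
something like this (untested sketch of the suggestion, not what the patch
does):

use_generic:
	/* Use the regular copy to deal with leftover bytes. */
	copy_ret = generic_copy_file_range(src_file, src_off, dst_file,
			dst_off, len, flags);
	if (copy_ret < 0)
		return total_copied > 0 ? total_copied : copy_ret;

	return total_copied + copy_ret;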



