On Wed, Nov 23, 2016 at 05:26:18PM -0800, Darrick J. Wong wrote:

[...]

> Keep in mind that the number of bytes deduped is returned to userspace
> via file_dedupe_range.info[x].bytes_deduped, so a properly functioning
> userspace program actually /can/ detect that its 128MB request got cut
> down to only 16MB and re-issue the request with the offsets moved up
> by 16MB.  The dedupe client in xfs_io (see dedupe_ioctl() in
> io/reflink.c) implements this strategy.  duperemove (the only other
> user I know of) also does this.
>
> So it's really no big deal to increase the limit beyond 16MB,
> eliminate it entirely, or even change it to cap the total request
> size while dropping the per-item IO limit.
>
> As I mentioned in my other reply, the only hesitation I have for not
> killing XFS_MAX_DEDUPE_LEN is that I feel that 2GB is enough IO for a
> single ioctl call.

Everything's relative.  btrfs has ioctls that will do hundreds of
terabytes of IO and take months to run.  2GB of data is nothing.

Deduping entire 100TB files with a single ioctl call makes as much
sense to me as reflink copying them with a single ioctl call.

The only reason I see to keep the limit is to work around something
wrong with the implementation.
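
For reference, here is a minimal sketch of the re-issue loop described
in the quoted text above, written against the FIDEDUPERANGE interface
in <linux/fs.h>.  It is only an illustration of the strategy, not the
actual xfs_io or duperemove code; the helper name dedupe_whole_range is
made up and error handling is trimmed:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Dedupe [src_off, src_off + len) of src_fd against dst_fd at dst_off,
 * re-issuing the request whenever the kernel trims it. */
static int dedupe_whole_range(int src_fd, __u64 src_off,
			      int dst_fd, __u64 dst_off, __u64 len)
{
	struct file_dedupe_range *req;

	req = calloc(1, sizeof(*req) +
			sizeof(struct file_dedupe_range_info));
	if (!req)
		return -1;

	while (len > 0) {
		req->src_offset = src_off;
		req->src_length = len;	/* kernel may shorten this */
		req->dest_count = 1;
		req->info[0].dest_fd = dst_fd;
		req->info[0].dest_offset = dst_off;

		if (ioctl(src_fd, FIDEDUPERANGE, req) < 0)
			break;
		if (req->info[0].status != FILE_DEDUPE_RANGE_SAME)
			break;		/* error or data differed */
		if (req->info[0].bytes_deduped == 0)
			break;		/* no forward progress */

		/* Move the offsets up by whatever was actually deduped. */
		src_off += req->info[0].bytes_deduped;
		dst_off += req->info[0].bytes_deduped;
		len -= req->info[0].bytes_deduped;
	}

	free(req);
	return len == 0 ? 0 : -1;
}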