Re: [PATCH v1 9/8] copy_file_range.2: New page documenting copy_file_range()

On Wed, Sep 09, 2015 at 10:17:57AM -0700, Darrick J. Wong wrote:
> I noticed that btrfs won't dedupe more than 16M per call.  Any thoughts?

btrfs_ioctl_file_extent_same:

        /*
         * Limit the total length we will dedupe for each operation.
         * This is intended to bound the total time spent in this
         * ioctl to something sane.
         */
        if (len > BTRFS_MAX_DEDUPE_LEN)
                len = BTRFS_MAX_DEDUPE_LEN;

The deduplication compares the source and destination blocks byte by byte
(btrfs_cmp_data()) and does not use a checksum-based approach. The 16M limit
is artificial; I don't have an estimate whether the value is OK or not. The
longer the dedupe chunk, the lower the chance of finding more matching
extents, so the chunk sizes used in practice are in the range of hundreds of
kilobytes. But this obviously depends on the data, and many-megabyte-sized
chunks could fit some use cases easily.
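
As a hedged illustration (not part of the original mail): a caller that wants
to dedupe more than 16M in one logical operation can simply loop on
BTRFS_IOC_FILE_EXTENT_SAME and advance by the bytes_deduped the kernel
reports for each destination. The sketch below assumes the
struct btrfs_ioctl_same_args / btrfs_ioctl_same_extent_info layout from
linux/btrfs.h; dedupe_range() is a hypothetical helper and error handling is
reduced to the essentials.

/*
 * Sketch: dedupe [src_off, src_off + len) of src_fd against the same-sized
 * range in dst_fd, looping because the kernel clamps each call to
 * BTRFS_MAX_DEDUPE_LEN (16M).
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

static int dedupe_range(int src_fd, __u64 src_off,
                        int dst_fd, __u64 dst_off, __u64 len)
{
        struct btrfs_ioctl_same_args *args;
        struct btrfs_ioctl_same_extent_info *info;

        /* One destination: args header plus a single extent_info entry. */
        args = calloc(1, sizeof(*args) + sizeof(*info));
        if (!args)
                return -1;
        info = &args->info[0];

        while (len > 0) {
                args->logical_offset = src_off;
                args->length = len;        /* kernel clamps this to 16M */
                args->dest_count = 1;
                info->fd = dst_fd;
                info->logical_offset = dst_off;
                info->bytes_deduped = 0;
                info->status = 0;

                if (ioctl(src_fd, BTRFS_IOC_FILE_EXTENT_SAME, args) < 0) {
                        perror("BTRFS_IOC_FILE_EXTENT_SAME");
                        break;
                }
                if (info->status != 0 || info->bytes_deduped == 0) {
                        /* data differed, or nothing more could be deduped */
                        fprintf(stderr, "stopped: status=%d\n", info->status);
                        break;
                }

                /* Advance past whatever the kernel actually deduped. */
                src_off += info->bytes_deduped;
                dst_off += info->bytes_deduped;
                len     -= info->bytes_deduped;
        }

        free(args);
        return len == 0 ? 0 : -1;
}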


