On Wed, Sep 09, 2015 at 10:17:57AM -0700, Darrick J. Wong wrote:
> I noticed that btrfs won't dedupe more than 16M per call. Any thoughts?

btrfs_ioctl_file_extent_same:

3138         /*
3139          * Limit the total length we will dedupe for each operation.
3140          * This is intended to bound the total time spent in this
3141          * ioctl to something sane.
3142          */
3143         if (len > BTRFS_MAX_DEDUPE_LEN)
3144                 len = BTRFS_MAX_DEDUPE_LEN;

The deduplication compares the source and destination blocks byte by byte
(btrfs_cmp_data()) and does not use a checksum-based approach.

The 16M limit is artificial; I don't have an estimate of whether the value
is right or not. The longer the dedupe chunk, the lower the chance of
finding more matching extents, so the chunk sizes used in practice are in
the range of hundreds of kilobytes. But this obviously depends on the data,
and many-megabyte chunks could fit some use cases easily.
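
For illustration, a userspace caller can simply split a larger range into
pieces at or below the per-call cap and loop. Below is a minimal sketch
against the BTRFS_IOC_FILE_EXTENT_SAME interface from linux/btrfs.h; the
helper name dedupe_range and the DEDUPE_CHUNK constant are my own, the 16M
value mirrors BTRFS_MAX_DEDUPE_LEN in current kernels, and error handling
is kept to the bare minimum:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

/* Illustrative constant, mirrors BTRFS_MAX_DEDUPE_LEN (16M) in the kernel */
#define DEDUPE_CHUNK (16ULL * 1024 * 1024)

/* Hypothetical helper: dedupe [src_off, src_off+len) of src_fd against
 * dst_fd at dst_off, issuing one ioctl per <=16M chunk. Offsets and
 * length are assumed to be block-aligned. */
static int dedupe_range(int src_fd, uint64_t src_off,
                        int dst_fd, uint64_t dst_off, uint64_t len)
{
        size_t argsz = sizeof(struct btrfs_ioctl_same_args) +
                       sizeof(struct btrfs_ioctl_same_extent_info);
        struct btrfs_ioctl_same_args *args = calloc(1, argsz);

        if (!args)
                return -1;

        while (len > 0) {
                uint64_t chunk = len > DEDUPE_CHUNK ? DEDUPE_CHUNK : len;

                memset(args, 0, argsz);
                args->logical_offset = src_off;
                args->length = chunk;
                args->dest_count = 1;
                args->info[0].fd = dst_fd;
                args->info[0].logical_offset = dst_off;

                /* The ioctl is issued on the source file descriptor */
                if (ioctl(src_fd, BTRFS_IOC_FILE_EXTENT_SAME, args) < 0) {
                        perror("BTRFS_IOC_FILE_EXTENT_SAME");
                        free(args);
                        return -1;
                }
                if (args->info[0].status != 0) {
                        /* e.g. BTRFS_SAME_DATA_DIFFERS: blocks did not match */
                        fprintf(stderr, "dedupe status %d at offset %llu\n",
                                args->info[0].status,
                                (unsigned long long)src_off);
                        free(args);
                        return -1;
                }
                if (args->info[0].bytes_deduped == 0)
                        break;  /* no progress; avoid spinning */

                src_off += args->info[0].bytes_deduped;
                dst_off += args->info[0].bytes_deduped;
                len     -= args->info[0].bytes_deduped;
        }
        free(args);
        return 0;
}

Note that because btrfs_cmp_data() reads and compares both ranges, each
call costs roughly two reads of the chunk; the per-call cap only bounds
the time spent inside a single ioctl, not the total work of the loop.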