On Mon, Apr 16, 2018 at 11:58:00PM -0700, Christoph Hellwig wrote:
> On Mon, Apr 16, 2018 at 10:19:18PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> >
> > Since deduplication potentially has to read in all the pages in both
> > files in order to compare the contents, cap the deduplication request
> > length at MAX_RW_COUNT (roughly 2GB) so that we have /some/ upper bound
> > on the request length and can't just lock up the kernel forever.  Found
> > by running generic/304 after commit 1ddae54555b62 ("common/rc: add
> > missing 'local' keywords").
>
> Looks fine.  btrfs limits to 16MB, I guess we don't want to go that low?

I've wondered about that -- the btrfs developers opine that you're
unlikely to hit a 16+ MB extent that can be deduplicated, but seeing as
the usage model seems to be "copy files onto the fs, let a dedupe program
target the files in the background", I have my doubts.

Specifically, most of the usage models I envision are backup programs
wanting to dedupe files that are almost but not quite the same, and VM
farms.  The cross-host VM migration tools I've seen tend to copy the
image as fast as possible to minimize transition time and worry about
whether or not there are shared blocks later.

OTOH I do worry a little about the prospect of doing 4GB of IO per
kernel call (up to 2GB read from each file), so maybe this should be
(MAX_RW_COUNT/2)?
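
Purely as illustration of the shape of the cap (the helper name and the
local macro definitions below are made up for the sketch -- this is not
the actual patch), the idea is just to clamp the requested length before
we start reading and comparing pages:

/* Sketch only: clamp a dedupe request length to a fixed upper bound. */
#include <limits.h>
#include <stdint.h>

/* Stand-ins for the kernel definitions; MAX_RW_COUNT is INT_MAX rounded
 * down to a page boundary, i.e. roughly 2GB with 4k pages. */
#define SKETCH_PAGE_SIZE	4096ULL
#define SKETCH_MAX_RW_COUNT	((uint64_t)INT_MAX & ~(SKETCH_PAGE_SIZE - 1))

/* Hypothetical helper: cap the bytes one dedupe call is allowed to compare. */
static uint64_t clamp_dedupe_len(uint64_t len)
{
	if (len > SKETCH_MAX_RW_COUNT)
		len = SKETCH_MAX_RW_COUNT;
	return len;
}

Whether the bound ends up being MAX_RW_COUNT, half of it, or btrfs's
16MB only changes the constant, not the shape of the check.

--D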