On Thu, Nov 24, 2016 at 10:53:24AM +1100, Dave Chinner wrote:
> On Wed, Nov 23, 2016 at 06:14:47PM -0500, Zygo Blaxell wrote:
> > On Thu, Nov 24, 2016 at 09:13:28AM +1100, Dave Chinner wrote:
> > > On Wed, Nov 23, 2016 at 08:55:59AM -0500, Zygo Blaxell wrote:
> > > > On Wed, Nov 23, 2016 at 03:26:32PM +1100, Dave Chinner wrote:
> > > > > On Tue, Nov 22, 2016 at 09:02:10PM -0500, Zygo Blaxell wrote:
> > > > > > On Thu, Nov 17, 2016 at 04:07:48PM -0800, Omar Sandoval wrote:
> > > > > > > 3. Both XFS and Btrfs cap each dedupe operation to 16MB, but the
> > > > > > > implicit EOF gets around this in the existing XFS implementation. I
> > > > > > > copied this for the Btrfs implementation.
> > > > > >
> > > > > > Somewhat tangential to this patch, but on the dedup topic: Can we raise
> > > > > > or drop that 16MB limit?
> > > > > >
> > > > > > The maximum btrfs extent length is 128MB.  Currently the btrfs dedup
> > > > > > behavior for a 128MB extent is to generate 8x16MB shared extent references
> > > > > > with different extent offsets to a single 128MB physical extent.
> > > > > > These references no longer look like the original 128MB extent to a
> > > > > > userspace dedup tool.  That raises the difficulty level substantially
> > > > > > for a userspace dedup tool when it tries to figure out which extents to
> > > > > > keep and which to discard or rewrite.
> > > > >
> > > > > That, IMO, is a btrfs design/implementation problem, not a problem
> > > > > with the API.  Applications are always going to end up doing things
> > > > > that aren't perfectly aligned to extent boundaries or sizes
> > > > > regardless of the size limit that is placed on the dedupe ranges.
> > > >
> > > > Given that XFS doesn't have all the problems btrfs does, why does XFS
> > > > have the same arbitrary size limit?  Especially since XFS demonstrably
> > > > doesn't need it?
> > >
> > > Creating a new-but-slightly-incompatible API just for XFS makes no
> > > sense - we have multiple filesystems that support this functionality
> > > and so they all should use the same APIs and present (as far as is
> > > possible) the same behaviour to userspace.
> >
> > OK.  Let's just remove the limit on all the filesystems then.
> > XFS doesn't need it, and btrfs can be fixed.
>
> Yet applications still have to support kernel versions where btrfs
> has a limit.  IOWs, we can remove the limit for future improvement,
> but that doesn't mean userspace is free from having to know about
> the existing limit constraints.

Keep in mind that the number of bytes deduped is returned to userspace
via file_dedupe_range.info[x].bytes_deduped, so a properly functioning
userspace program actually /can/ detect that its 128MB request got cut
down to only 16MB, and re-issue the request with the offsets moved up
by 16MB.  The dedupe client in xfs_io (see dedupe_ioctl() in
io/reflink.c) implements this strategy; duperemove (the only other user
I know of) does the same.

So it's really no big deal to raise the limit beyond 16MB, eliminate it
entirely, or even change it to cap the total request size while
dropping the per-item IO limit.  As I mentioned in my other reply, the
only hesitation I have about killing XFS_MAX_DEDUPE_LEN is that I feel
2GB is enough IO for a single ioctl call.  (Dave: That said, if you
want to kill it, I'm more than happy to do so for XFS and ocfs2.)
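
Roughly, that re-issue loop looks like the sketch below -- this is
*not* the xfs_io or duperemove code, just a minimal illustration for a
single destination fd (dedupe_whole_range() is a made-up helper name),
using the FIDEDUPERANGE ioctl and struct file_dedupe_range from
linux/fs.h:

/*
 * Hypothetical helper: keep re-issuing FIDEDUPERANGE until the whole
 * range is deduped or we stop making progress.  One destination only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

static int dedupe_whole_range(int src_fd, __u64 src_off,
			      int dst_fd, __u64 dst_off, __u64 len)
{
	struct file_dedupe_range *req;

	req = calloc(1, sizeof(*req) + sizeof(struct file_dedupe_range_info));
	if (!req)
		return -1;

	while (len > 0) {
		req->src_offset = src_off;
		req->src_length = len;		/* ask for everything left */
		req->dest_count = 1;
		req->info[0].dest_fd = dst_fd;
		req->info[0].dest_offset = dst_off;
		req->info[0].bytes_deduped = 0;

		if (ioctl(src_fd, FIDEDUPERANGE, req) < 0) {
			perror("FIDEDUPERANGE");
			break;
		}
		if (req->info[0].status != FILE_DEDUPE_RANGE_SAME ||
		    req->info[0].bytes_deduped == 0)
			break;			/* differs, error, or no progress */

		/* Kernel may have clamped the request (e.g. to 16MB); advance. */
		src_off += req->info[0].bytes_deduped;
		dst_off += req->info[0].bytes_deduped;
		len -= req->info[0].bytes_deduped;
	}

	free(req);
	return len ? -1 : 0;
}

The point is simply that the caller treats a short bytes_deduped count
as "keep going" rather than as failure, so whatever per-call cap the
kernel imposes doesn't change the end result.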
--D

> That is, once a behaviour has been exposed to userspace through an
> API, we can't just change it and act like it was always that way -
> apps still have to support kernels that expose the old behaviour.
> i.e. the old behaviour is there forever, and this is why designing
> userspace APIs is /hard/.  It's also why it's better to use an
> existing, slightly less than ideal API than invent a new one that
> will simply have different problems exposed in future...
>
> > > IOWs it's more important to use existing APIs than to invent a new
> > > one that does almost the same thing.  This way userspace applications
> > > don't need to be changed to support new XFS functionality and we
> > > make life easier for everyone.
> >
> > Except removing the limit doesn't work that way.  An application that
> > didn't impose an undocumented limit on itself wouldn't break when moved
> > to a filesystem that imposed no such limit, i.e. if XFS had no limit,
> > an application that moved from btrfs to XFS would just work.
>
> It goes /both ways/ though.  Write an app on XFS that does not care
> about limits and it won't work on btrfs because it gets unexpected
> errors.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx