On Tue, Apr 03 2018 at 2:24pm -0400,
Dan Williams <dan.j.williams@xxxxxxxxx> wrote:

> On Fri, Mar 30, 2018 at 9:03 PM, Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
> > In preparation for allowing filesystems to augment the dev_pagemap
> > associated with a dax_device, add an ->fs_claim() callback. The
> > ->fs_claim() callback is leveraged by the device-mapper dax
> > implementation to iterate all member devices in the map and repeat the
> > claim operation across the array.
> >
> > In order to resolve collisions between filesystem operations and DMA to
> > DAX mapped pages we need a callback when DMA completes. With a callback
> > we can hold off filesystem operations while DMA is in-flight and then
> > resume those operations when the last put_page() occurs on a DMA page.
> > The ->fs_claim() operation arranges for this callback to be registered,
> > although that implementation is saved for a later patch.
> >
> > Cc: Alasdair Kergon <agk@xxxxxxxxxx>
> > Cc: Mike Snitzer <snitzer@xxxxxxxxxx>
>
> Mike, do these DM touches look ok to you? We need these ->fs_claim()
> / ->fs_release() interfaces for device-mapper to set up filesystem-dax
> infrastructure on all sub-devices whenever a dax-capable DM device is
> mounted. It builds on the device-mapper dax dependency removal
> patches.

I'd prefer dm_dax_iterate() be renamed to dm_dax_iterate_devices().

But dm_dax_iterate() is weird... it simply returns the struct
dax_device *dax_dev that is passed in, seemingly without directly
changing anything about that dax_device (I can infer that you're
claiming the underlying devices, but...).

In general, users of ti->type->iterate_devices can get a result back
(via the 'int' return); you aren't using it that way (and maybe dax
will never have a need to return an answer).  All said, I think I'd
prefer to see dm_dax_iterate_devices() return void.  I've appended a
rough sketch of what I mean at the end of this mail.

But please let me know if I'm missing something, thanks.
Mike
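
For reference, here is roughly the shape I have in mind for the
void-returning variant.  This is only a sketch, assuming the usual
dm.c helpers (dax_get_private(), dm_get_live_table()/dm_put_live_table(),
dm_table_get_num_targets(), dm_table_get_target()); the names
device_fs_claim() and dm_dax_fs_claim() are placeholders, not
necessarily what your patch actually uses:

/*
 * Sketch: walk every target in the live table and invoke 'fn' on each
 * underlying device.  Nothing about the dax_device itself changes here,
 * so there is nothing useful to return.
 */
static void dm_dax_iterate_devices(struct dax_device *dax_dev,
				   iterate_devices_callout_fn fn, void *arg)
{
	struct mapped_device *md = dax_get_private(dax_dev);
	struct dm_table *map;
	struct dm_target *ti;
	int i, srcu_idx;

	map = dm_get_live_table(md, &srcu_idx);

	for (i = 0; i < dm_table_get_num_targets(map); i++) {
		ti = dm_table_get_target(map, i);

		if (ti->type->iterate_devices)
			ti->type->iterate_devices(ti, fn, arg);
	}

	dm_put_live_table(md, srcu_idx);
}

/* Placeholder per-device callout; the real one would claim dev->dax_dev. */
static int device_fs_claim(struct dm_target *ti, struct dm_dev *dev,
			   sector_t start, sector_t len, void *data)
{
	/* ... claim dev->dax_dev on behalf of 'data' (the fs owner) ... */
	return 0;
}

/*
 * The ->fs_claim() caller already has the dax_device in hand, so
 * returning it from the iterator buys nothing; the claim implementation
 * can just return it itself.
 */
static struct dax_device *dm_dax_fs_claim(struct dax_device *dax_dev,
					  void *owner)
{
	dm_dax_iterate_devices(dax_dev, device_fs_claim, owner);
	return dax_dev;
}

That way the iteration helper is clearly a side-effect-only walk over
the table, and propagating an answer can be added later if a user
actually needs one.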