On Thu, Dec 17, 2015 at 2:00 PM, Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx> wrote:
> On Fri, Dec 11, 2015 at 10:11:53AM -0800, Dan Williams wrote:
>> The DAX implementation needs to protect new calls to ->direct_access()
>> and usage of its return value against the driver for the underlying
>> block device being disabled.  Use blk_queue_enter()/blk_queue_exit() to
>> hold off blk_cleanup_queue() from proceeding, or otherwise fail new
>> mapping requests if the request_queue is being torn down.
>>
>> This also introduces blk_dax_ctl to simplify the interface from fs/dax.c
>> through dax_map_atomic() to bdev_direct_access().
>>
>> Cc: Jan Kara <jack@xxxxxxxx>
>> Cc: Jens Axboe <axboe@xxxxxx>
>> Cc: Dave Chinner <david@xxxxxxxxxxxxx>
>> Cc: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
>> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxxxx>
>> [willy: fix read() of a hole]
>> Reviewed-by: Jeff Moyer <jmoyer@xxxxxxxxxx>
>> Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
> <>
>> @@ -308,20 +351,18 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
>>  		goto out;
>>  	}
>>
>> -	error = bdev_direct_access(bh->b_bdev, sector, &addr, &pfn, bh->b_size);
>> -	if (error < 0)
>> -		goto out;
>> -	if (error < PAGE_SIZE) {
>> -		error = -EIO;
>> +	if (dax_map_atomic(bdev, &dax) < 0) {
>> +		error = PTR_ERR(dax.addr);
>>  		goto out;
>>  	}
>>
>>  	if (buffer_unwritten(bh) || buffer_new(bh)) {
>> -		clear_pmem(addr, PAGE_SIZE);
>> +		clear_pmem(dax.addr, PAGE_SIZE);
>>  		wmb_pmem();
>>  	}
>> +	dax_unmap_atomic(bdev, &dax);
>>
>> -	error = vm_insert_mixed(vma, vaddr, pfn);
>> +	error = vm_insert_mixed(vma, vaddr, dax.pfn);
>
> Since we're still using the contents of the struct blk_dax_ctl as an argument
> to vm_insert_mixed(), shouldn't dax_unmap_atomic() be after this call?
>
> Unless there is some reason to protect dax.addr with a blk queue reference,
> but not dax.pfn?

dax_map_atomic() only protects dax.addr, which is valid only while the
driver is active.
dax.pfn is valid as long as the memory physically exists and the pfn is
known to still reference the same sector of the device.  After the
block_device is torn down the pfn may no longer be valid (think brd with
dynamically allocated pages, or hot remove of pmem).  That is the
rationale for this other pending patch series [1], which goes and shoots
down all mappings to a pfn at del_gendisk() time.  It relies on
dax_map_atomic() to prevent any new mapping requests from succeeding
after the shoot down has taken place.

[1]: https://lists.01.org/pipermail/linux-nvdimm/2015-December/003065.html
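To make the lifetime argument above concrete, here is a small userspace
model (not the kernel code; all names such as dev_model, map_atomic, and
try_teardown are hypothetical) of the gating that
dax_map_atomic()/dax_unmap_atomic() get from blk_queue_enter()/
blk_queue_exit(): new mapping requests fail once teardown has begun, and
teardown is held off while any mapping reference is outstanding.

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of a request_queue's enter/exit gating.
 * dying mirrors a queue marked for blk_cleanup_queue(); refcount
 * mirrors outstanding dax_map_atomic() references. */
struct dev_model {
	bool dying;
	int refcount;
};

/* Analogue of dax_map_atomic(): take a reference, unless teardown
 * has already started, in which case the mapping request fails. */
static int map_atomic(struct dev_model *dev)
{
	if (dev->dying)
		return -1;	/* new mappings fail, like -ENXIO */
	dev->refcount++;
	return 0;
}

/* Analogue of dax_unmap_atomic(): drop the reference. */
static void unmap_atomic(struct dev_model *dev)
{
	dev->refcount--;
}

/* Analogue of blk_cleanup_queue(): mark the device dying; teardown
 * may proceed only once no references remain.  The real code blocks
 * until the count drains rather than returning a status. */
static bool try_teardown(struct dev_model *dev)
{
	dev->dying = true;
	return dev->refcount == 0;
}
```

In this model dax.addr corresponds to state that is only safe to touch
between map_atomic() and unmap_atomic(), while a pfn handed out earlier
may outlive the reference, which is exactly why the separate shoot-down
series is needed.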