On Fri, Dec 11, 2015 at 10:11:53AM -0800, Dan Williams wrote:
> The DAX implementation needs to protect new calls to ->direct_access()
> and usage of its return value against the driver for the underlying
> block device being disabled.  Use blk_queue_enter()/blk_queue_exit() to
> hold off blk_cleanup_queue() from proceeding, or otherwise fail new
> mapping requests if the request_queue is being torn down.
>
> This also introduces blk_dax_ctl to simplify the interface from fs/dax.c
> through dax_map_atomic() to bdev_direct_access().
>
> Cc: Jan Kara <jack@xxxxxxxx>
> Cc: Jens Axboe <axboe@xxxxxx>
> Cc: Dave Chinner <david@xxxxxxxxxxxxx>
> Cc: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxxxx>
> [willy: fix read() of a hole]
> Reviewed-by: Jeff Moyer <jmoyer@xxxxxxxxxx>
> Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>

<>

> @@ -308,20 +351,18 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
>  		goto out;
>  	}
>
> -	error = bdev_direct_access(bh->b_bdev, sector, &addr, &pfn, bh->b_size);
> -	if (error < 0)
> -		goto out;
> -	if (error < PAGE_SIZE) {
> -		error = -EIO;
> +	if (dax_map_atomic(bdev, &dax) < 0) {
> +		error = PTR_ERR(dax.addr);
>  		goto out;
>  	}
>
>  	if (buffer_unwritten(bh) || buffer_new(bh)) {
> -		clear_pmem(addr, PAGE_SIZE);
> +		clear_pmem(dax.addr, PAGE_SIZE);
>  		wmb_pmem();
>  	}
> +	dax_unmap_atomic(bdev, &dax);
>
> -	error = vm_insert_mixed(vma, vaddr, pfn);
> +	error = vm_insert_mixed(vma, vaddr, dax.pfn);

Since we're still using the contents of the struct blk_dax_ctl (dax.pfn)
as an argument to vm_insert_mixed(), shouldn't dax_unmap_atomic() come
after that call?  Or is there some reason why dax.addr needs to be
protected by a blk queue reference, but dax.pfn does not?
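
In other words, something like the following is what I would have
expected (an untested sketch on top of this patch; I could well be
missing a reason why dax.pfn stays valid after the queue reference is
dropped):

	if (buffer_unwritten(bh) || buffer_new(bh)) {
		clear_pmem(dax.addr, PAGE_SIZE);
		wmb_pmem();
	}

	/*
	 * Hold the request_queue reference across every use of the
	 * ->direct_access() results, dax.pfn included, not just
	 * dax.addr.
	 */
	error = vm_insert_mixed(vma, vaddr, dax.pfn);
	dax_unmap_atomic(bdev, &dax);

That would keep the blk_queue_enter()/blk_queue_exit() pairing
symmetric around all consumers of the struct blk_dax_ctl contents.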