From: Dan Williams <dan.j.williams@xxxxxxxxx>

1/ If a mapping overlaps a bad sector, fail the request.

2/ Do not opportunistically report more dax-capable capacity than is
requested when errors are present.

[vishal: fix a conflict with system RAM collision patches]

Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
---
 block/ioctl.c         | 9 ---------
 drivers/nvdimm/pmem.c | 8 ++++++++
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 4ff1f92..bf80bfd 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -423,15 +423,6 @@ bool blkdev_dax_capable(struct block_device *bdev)
 			|| (bdev->bd_part->nr_sects % (PAGE_SIZE / 512)))
 		return false;
 
-	/*
-	 * If the device has known bad blocks, force all I/O through the
-	 * driver / page cache.
-	 *
-	 * TODO: support finer grained dax error handling
-	 */
-	if (disk->bb && disk->bb->count)
-		return false;
-
 	return true;
 }
 #endif
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index f72733c..4567d9a 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -188,9 +188,17 @@ static long pmem_direct_access(struct block_device *bdev,
 	struct pmem_device *pmem = bdev->bd_disk->private_data;
 	resource_size_t offset = sector * 512 + pmem->data_offset;
 
+	if (unlikely(is_bad_pmem(&pmem->bb, sector, dax->size)))
+		return -EIO;
 	dax->addr = pmem->virt_addr + offset;
 	dax->pfn = phys_to_pfn_t(pmem->phys_addr + offset,
 			pmem->pfn_flags);
 
+	/*
+	 * If badblocks are present, limit known good range to the
+	 * requested range.
+	 */
+	if (unlikely(pmem->bb.count))
+		return dax->size;
 	return pmem->size - pmem->pfn_pad - offset;
 }
 
--
2.5.5
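
For readers following the change, below is a minimal user-space sketch of the
return-value contract this patch gives pmem_direct_access(): fail with -EIO if
the requested range overlaps a known bad sector, report only the requested
size when any bad blocks exist on the device, and otherwise report the full
remaining capacity. The struct and function names (fake_pmem, fake_is_bad_pmem,
fake_direct_access) are simplified stand-ins for illustration, not the real
libnvdimm/badblocks API.

#include <stdio.h>
#include <errno.h>

/* Simplified stand-ins for the kernel structures touched by the patch. */
struct fake_badblocks {
	int count;              /* number of known bad ranges */
	long long bad_start;    /* single bad range, in 512-byte sectors */
	long long bad_len;      /* length of that range, in sectors */
};

struct fake_pmem {
	struct fake_badblocks bb;
	long long size;         /* device size in bytes */
};

/* Stand-in for is_bad_pmem(): does [sector, sector + len bytes) overlap a bad range? */
static int fake_is_bad_pmem(struct fake_badblocks *bb, long long sector, long long len)
{
	long long first = sector;
	long long last = sector + (len + 511) / 512 - 1;

	return bb->count && !(last < bb->bad_start ||
			      first >= bb->bad_start + bb->bad_len);
}

/*
 * Return-value contract after the patch:
 *   -EIO                 if the requested range overlaps a bad sector,
 *   the requested size   if bad blocks exist elsewhere on the device,
 *   the full remaining size otherwise.
 */
static long long fake_direct_access(struct fake_pmem *pmem, long long sector, long long req_size)
{
	long long offset = sector * 512;

	if (fake_is_bad_pmem(&pmem->bb, sector, req_size))
		return -EIO;
	if (pmem->bb.count)
		return req_size;
	return pmem->size - offset;
}

int main(void)
{
	struct fake_pmem pmem = {
		.bb = { .count = 1, .bad_start = 8, .bad_len = 8 },
		.size = 1 << 20,
	};

	printf("%lld\n", fake_direct_access(&pmem, 0, 4096)); /* 4096: clamped to request */
	printf("%lld\n", fake_direct_access(&pmem, 8, 4096)); /* -5: overlaps the bad range */
	return 0;
}

The sketch only models the decision logic; the real driver additionally
accounts for pmem->data_offset and pmem->pfn_pad when computing the remaining
capacity, as the hunk above shows.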