On Tue, Apr 05, 2022 at 01:47:45PM -0600, Jane Chu wrote:
> Introduce DAX_RECOVERY flag to dax_direct_access(). The flag is
> not set by default in dax_direct_access() such that the helper
> does not translate a pmem range to kernel virtual address if the
> range contains uncorrectable errors. When the flag is set,
> the helper ignores the UEs and return kernel virtual adderss so
> that the caller may get on with data recovery via write.
>
> Also introduce a new dev_pagemap_ops .recovery_write function.
> The function is applicable to FSDAX device only. The device
> page backend driver provides .recovery_write function if the
> device has underlying mechanism to clear the uncorrectable
> errors on the fly.

I know Dan suggested it, but I still think dev_pagemap_ops is the very
wrong choice here.  It is about VM callbacks to ZONE_DEVICE owners
independent of what pagemap type they are.  .recovery_write on the
other hand is completely specific to the DAX write path and has no MM
interactions at all.

>  /* see "strong" declaration in tools/testing/nvdimm/pmem-dax.c */
>  __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
> -		long nr_pages, void **kaddr, pfn_t *pfn)
> +		long nr_pages, int flags, void **kaddr, pfn_t *pfn)
>  {
>  	resource_size_t offset = PFN_PHYS(pgoff) + pmem->data_offset;
> +	sector_t sector = PFN_PHYS(pgoff) >> SECTOR_SHIFT;
> +	unsigned int num = PFN_PHYS(nr_pages) >> SECTOR_SHIFT;
> +	struct badblocks *bb = &pmem->bb;
> +	sector_t first_bad;
> +	int num_bad;
> +	bool bad_in_range;
> +	long actual_nr;
> +
> +	if (!bb->count)
> +		bad_in_range = false;
> +	else
> +		bad_in_range = !!badblocks_check(bb, sector, num, &first_bad, &num_bad);
>
> -	if (unlikely(is_bad_pmem(&pmem->bb, PFN_PHYS(pgoff) / 512,
> -			PFN_PHYS(nr_pages))))
> +	if (bad_in_range && !(flags & DAX_RECOVERY))
>  		return -EIO;

The use of bad_in_range here seems a little convoluted.
See the attached patch on how I would structure the function to avoid
the variable and have the recovery code in a self-contained chunk.

> -		map_len = dax_direct_access(dax_dev, pgoff, PHYS_PFN(size),
> -				&kaddr, NULL);
> +		nrpg = PHYS_PFN(size);
> +		map_len = dax_direct_access(dax_dev, pgoff, nrpg, 0, &kaddr, NULL);

Overly long line here.
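On the dev_pagemap_ops point above, the shape being argued for can be
sketched in plain userspace C: hang the callback off a DAX-specific ops
table instead. The struct layout and names below are illustrative only,
not the kernel's actual dax_operations definition, and the byte-offset
handling is simplified.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct dax_device;

/* Illustrative DAX ops table carrying a driver-supplied recovery path. */
struct dax_operations {
	long (*direct_access)(struct dax_device *dax, unsigned long pgoff,
			      long nr_pages, int flags, void **kaddr);
	size_t (*recovery_write)(struct dax_device *dax, unsigned long off,
				 void *addr, size_t bytes);
};

struct dax_device {
	const struct dax_operations *ops;
	char backing[4096];	/* stand-in for the pmem media */
};

/* Driver hook: a real driver would clear poison before writing. */
static size_t pmem_recovery_write(struct dax_device *dax, unsigned long off,
				  void *addr, size_t bytes)
{
	memcpy(dax->backing + off, addr, bytes);
	return bytes;
}

static const struct dax_operations pmem_dax_ops = {
	.recovery_write = pmem_recovery_write,
};

/* Generic DAX layer: degrades gracefully if the driver lacks the hook. */
static size_t dax_recovery_write(struct dax_device *dax, unsigned long off,
				 void *addr, size_t bytes)
{
	if (!dax->ops->recovery_write)
		return 0;
	return dax->ops->recovery_write(dax, off, addr, bytes);
}
```

This keeps the recovery path inside the DAX write machinery, where the
review says it belongs, without touching anything MM-facing.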
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index b868a88a0d589..377e4d59aa90f 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -272,42 +272,40 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
 	struct badblocks *bb = &pmem->bb;
 	sector_t first_bad;
 	int num_bad;
-	bool bad_in_range;
-	long actual_nr;
-
-	if (!bb->count)
-		bad_in_range = false;
-	else
-		bad_in_range = !!badblocks_check(bb, sector, num, &first_bad, &num_bad);
-
-	if (bad_in_range && !(flags & DAX_RECOVERY))
-		return -EIO;
 
 	if (kaddr)
 		*kaddr = pmem->virt_addr + offset;
 	if (pfn)
 		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
 
-	if (!bad_in_range) {
+	if (bb->count &&
+	    badblocks_check(bb, sector, num, &first_bad, &num_bad)) {
+		long actual_nr;
+
+		if (!(flags & DAX_RECOVERY))
+			return -EIO;
 		/*
-		 * If badblock is present but not in the range, limit known good range
-		 * to the requested range.
+		 * Set the recovery stride to the kernel page size because the
+		 * underlying driver and firmware clear poison functions don't
+		 * appear to handle large chunk (such as 2MiB) reliably.
 		 */
-		if (bb->count)
-			return nr_pages;
-		return PHYS_PFN(pmem->size - pmem->pfn_pad - offset);
+		actual_nr = PHYS_PFN(
+			PAGE_ALIGN((first_bad - sector) << SECTOR_SHIFT));
+		dev_dbg(pmem->bb.dev, "start sector(%llu), nr_pages(%ld), first_bad(%llu), actual_nr(%ld)\n",
+				sector, nr_pages, first_bad, actual_nr);
+		if (actual_nr)
+			return actual_nr;
+		return 1;
 	}
 
 	/*
-	 * In case poison is found in the given range and DAX_RECOVERY flag is set,
-	 * recovery stride is set to kernel page size because the underlying driver and
-	 * firmware clear poison functions don't appear to handle large chunk (such as
-	 * 2MiB) reliably.
+	 * If badblock is present but not in the range, limit known good range
+	 * to the requested range.
 	 */
-	actual_nr = PHYS_PFN(PAGE_ALIGN((first_bad - sector) << SECTOR_SHIFT));
-	dev_dbg(pmem->bb.dev, "start sector(%llu), nr_pages(%ld), first_bad(%llu), actual_nr(%ld)\n",
-			sector, nr_pages, first_bad, actual_nr);
-	return (actual_nr == 0) ? 1 : actual_nr;
+	if (bb->count)
+		return nr_pages;
+	return PHYS_PFN(pmem->size - pmem->pfn_pad - offset);
 }
 
 static const struct block_device_operations pmem_fops = {
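For what it's worth, the restructured length calculation can be modeled
outside the kernel. The sketch below stubs badblocks_check() with a
single hard-coded bad range and elides the pmem device entirely; the
constant names mirror the kernel's but the arithmetic is a simplified
stand-in, not the driver's actual code.

```c
#include <assert.h>
#include <stdbool.h>

#define SECTOR_SHIFT	9
#define PAGE_SHIFT	12
#define DAX_RECOVERY	(1 << 0)
#define EIO		5

/* Stub: one known-bad sector range on the device. */
static long long bad_start = 16;	/* first bad sector */
static int bad_len = 8;			/* number of bad sectors */
static int bb_count = 1;		/* number of badblocks records */

/* Stub of badblocks_check(): does [sector, sector+num) hit the bad range? */
static bool badblocks_check(long long sector, unsigned int num,
			    long long *first_bad, int *num_bad)
{
	if (sector < bad_start + bad_len &&
	    bad_start < sector + (long long)num) {
		*first_bad = bad_start;
		*num_bad = bad_len;
		return true;
	}
	return false;
}

/*
 * Model of the restructured flow: -EIO when poison is hit without
 * DAX_RECOVERY; with the flag, the page-aligned span up to the first
 * bad sector (minimum one page); otherwise the full requested range.
 */
static long direct_access_len(long long pgoff, long nr_pages, int flags)
{
	long long sector = pgoff << (PAGE_SHIFT - SECTOR_SHIFT);
	unsigned int num = (unsigned int)(nr_pages << (PAGE_SHIFT - SECTOR_SHIFT));
	long long first_bad;
	int num_bad;

	if (bb_count && badblocks_check(sector, num, &first_bad, &num_bad)) {
		long actual_nr;

		if (!(flags & DAX_RECOVERY))
			return -EIO;
		/* PAGE_ALIGN the byte span up to the first bad sector. */
		actual_nr = (long)((((first_bad - sector) << SECTOR_SHIFT)
				    + (1 << PAGE_SHIFT) - 1) >> PAGE_SHIFT);
		return actual_nr ? actual_nr : 1;
	}
	/* Bad blocks exist only outside the range: honor the request. */
	return nr_pages;
}
```

With 8 sectors per page and the bad range starting at sector 16, a
4-page request at pgoff 0 fails with -EIO unless DAX_RECOVERY is set,
in which case it is trimmed to the 2 good leading pages; a request
starting on the bad page itself yields the single-page recovery stride.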