On Fri, Nov 05, 2021 at 07:16:38PM -0600, Jane Chu wrote:
>  static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
>  		void *addr, size_t bytes, struct iov_iter *i, int mode)
>  {
> +	phys_addr_t pmem_off;
> +	size_t len, lead_off;
> +	struct pmem_device *pmem = dax_get_private(dax_dev);
> +	struct device *dev = pmem->bb.dev;
> +
> +	if (unlikely(mode == DAX_OP_RECOVERY)) {
> +		lead_off = (unsigned long)addr & ~PAGE_MASK;
> +		len = PFN_PHYS(PFN_UP(lead_off + bytes));
> +		if (is_bad_pmem(&pmem->bb, PFN_PHYS(pgoff) / 512, len)) {
> +			if (lead_off || !(PAGE_ALIGNED(bytes))) {
> +				dev_warn(dev, "Found poison, but addr(%p) and/or bytes(%#lx) not page aligned\n",
> +					addr, bytes);
> +				return (size_t) -EIO;
> +			}
> +			pmem_off = PFN_PHYS(pgoff) + pmem->data_offset;
> +			if (pmem_clear_poison(pmem, pmem_off, bytes) !=
> +					BLK_STS_OK)
> +				return (size_t) -EIO;
> +		}
> +	}

This is in the wrong spot.  As seen in my WIP series, individual drivers
really should not hook into copying to and from the iter, because it is
just one way to write to an nvdimm.  How would dm-writecache clear the
errors with this scheme?

So IMHO going back to the separate recovery method as in your previous
patch really is the way to go.  If/when the 64-bit store happens, we
need to figure out a good way to clear the bad block list for that.