On Mon, Feb 24, 2020 at 1:53 PM Jeff Moyer <jmoyer@xxxxxxxxxx> wrote:
>
> Dan Williams <dan.j.williams@xxxxxxxxx> writes:
>
> >> Let's just focus on reporting errors when we know we have them.
> >
> > That's the problem in my eyes. If software needs to contend with
> > latent error reporting then it should always contend; otherwise
> > software has multiple error models to wrangle.
>
> The only way for an application to know that the data has been written
> successfully would be to issue a read after every write. That's not a
> performance hit most applications are willing to take. And, of course,
> the media can still go bad at a later time, so it only guarantees the
> data is accessible immediately after having been written.
>
> What I'm suggesting is that we should not complete a write successfully
> if we know that the data will not be retrievable. I wouldn't call this
> adding an extra error model to contend with. Applications should
> already be checking for errors on write.
>
> Does that make sense? Are we talking past each other?

The badblocks list is late to update in both directions: late to add
entries that the scrub has yet to find, and late to delete entries that
were inadvertently cleared by cache-line writes that did not first
ingest the poison for a read-modify-write. So I see the above as
wishful in treating the error list as the hard source of truth, and it
is unfortunate to up-level all sub-sector error entries into full
PAGE_SIZE data-offline events. I'm hoping we can make the error
handling more fine-grained over time, but for the current patch,
managing the blast radius at PAGE_SIZE granularity at least keeps the
zero path consistent with the write path.

> > Setting that aside we can start with just treating zeroing the same as
> > the copy_from_iter() case and fail the I/O at the dax_direct_access()
> > step.
>
> OK.
>
> > I'd rather have a separate op that filesystems can use to clear errors
> > at block allocation time that can be enforced to have the correct
> > alignment.
>
> So would file systems always call that routine instead of zeroing, or
> would they first check to see if there are badblocks?

The proposal is that filesystems distinguish zeroing from free-block
allocation/initialization, so that the fsdax implementation directs
initialization requests to a driver callback. This "initialization op"
would take care of checking for poison and clearing it. All other dax
paths would not consult the badblocks list.
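
For concreteness, here is a rough sketch of what such a driver-side
"initialization op" could look like, in kernel-style C. Everything
named here is hypothetical except badblocks_check()/badblocks_clear(),
which are the existing in-kernel badblocks helpers; the simplified
struct pmem_device, clear_poison_range(), and pmem_dax_zero_init_range()
are illustrative placeholders, not proposed or existing interfaces.

#include <linux/badblocks.h>
#include <linux/blkdev.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

/*
 * Illustrative stand-in for the real pmem device structure: just a
 * badblocks list and a kernel mapping of the media.
 */
struct pmem_device {
	struct badblocks bb;
	void *virt_addr;
};

/* Stand-in for the driver-specific media error clearing mechanism. */
static int clear_poison_range(struct pmem_device *pmem, sector_t sector,
			      int nr_sectors)
{
	/* e.g. issue the platform's clear-uncorrectable-error command */
	return 0;
}

/*
 * Hypothetical "initialization op": called by fsdax only on the
 * free-block allocation/initialization path.  Ordinary reads, writes,
 * and hole zeroing would not call this, and so would never consult the
 * badblocks list.
 */
static int pmem_dax_zero_init_range(struct pmem_device *pmem,
				    sector_t sector, int nr_sectors)
{
	sector_t first_bad;
	int num_bad;

	if (badblocks_check(&pmem->bb, sector, nr_sectors,
			    &first_bad, &num_bad)) {
		/* Poison in the range: clear it before handing blocks out. */
		if (clear_poison_range(pmem, first_bad, num_bad))
			return -EIO;
		badblocks_clear(&pmem->bb, first_bad, num_bad);
	}

	/*
	 * Range is known good (or freshly cleared); zero it.  A real
	 * implementation would use a pmem-aware zeroing/flush helper
	 * rather than a plain memset().
	 */
	memset(pmem->virt_addr + ((u64)sector << SECTOR_SHIFT), 0,
	       (u64)nr_sectors << SECTOR_SHIFT);
	return 0;
}

The key property of the sketch is the one described above: the poison
check and clear live only behind this callback, so the regular dax read
and write paths stay out of the error-list business entirely.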