On Mon, Mar 17, 2025 at 04:55:24PM -0400, John Stoffel wrote:
> >>>>> "Kent" == Kent Overstreet <kent.overstreet@xxxxxxxxx> writes:
>
> > Roland Vet spotted a good one: currently, rebalance/copygc get stuck if
> > we've got an extent that can't be read, due to checksum error/bitrot.
>
> > This took some doing to fix properly, because
>
> > - We don't want to just delete such data (replace it with
> >   KEY_TYPE_error); we never want to delete anything except when
> >   explicitly told to by the user, and while we don't yet have an API for
> >   "read this file even though there's errors, just give me what you
> >   have", we definitely will in the future.
>
> So will open() just return an error? How will this look from
> userspace?

Not the open, the read - the typical case is that only a single extent
goes bad; it's like any other IO error.

> > - Not being able to move data is a non-option: that would block copygc,
> >   device removal, etc.
>
> > - And, moving said extent requires giving it a new checksum - strictly
> >   required if the move has to fragment it, teaching the write/move path
> >   about extents with bad checksums is unpalatable, and anyways we'd
> >   like to be able to guard against more bitrot, if we can.
>
> Why does it need a new checksum if there are read errors? What
> happens if there are more read errors? If I have a file on a
> filesystem which is based on spinning rust and I get a single bit
> flip, I'm super happy you catch it.

The data move paths verify checksums very strictly as they move data
around, so they don't introduce bitrot. I'm not going to add

  if (!bitrotted_extent)
          checksum();
  else
          no_checksum();

Eww... Besides being gross, we'd also like to guard against introducing
more bitrot.

> But now you re-checksum the file, with the read error, and return it?
> I'm stupid and just a user/IT guy. I want notifications, but I don't
> want my application to block so I can't kill it, or unmount the
> filesystem. Or continue to use it if I like.
The aforementioned poison bit ensures that you still get the error from
the original checksum failure when you read that data - unless you use
an appropriate "give it to me anyways" API.

> > So that means:
>
> > - Extents need a poison bit: "reads return errors, even though it now
> >   has a good checksum" - this was added in a separate patch queued up
> >   for 6.15.
>
> Sorry for being dense, but does a file have one or more extents? Or
> is this at a level below that?

Files have multiple extents. An extent is one contiguous range of data,
and in bcachefs checksums are at the extent level, not the block level,
so checksummed (and compressed) extents are limited, by default, to
128k.

> >   It's an incompat feature because it's a new extent field, and old
> >   versions can't parse extents with unknown field types, since they
> >   won't know their sizes - meaning users will have to explicitly do an
> >   incompat upgrade to make use of this stuff.
>
> > - The read path needs to do additional retries after checksum errors
> >   before giving up and marking an extent poisoned, so that we don't
> >   accidentally convert a transient error into permanent corruption.
>
> When doing these retries, is the filesystem locked up, or will the
> process doing the read() be blocked from being killed?

The process doing the read() can't be killed during this, no. If
requested this could be changed, but keep in mind that retries are
limited in number. Nothing else is "locked up"; everything else
proceeds as normal.

> > - The read path gets a whole bunch of work to plumb precise modern
> >   error codes around, so that e.g. the retry path, the data move path,
> >   and the "mark extent poisoned" path all know exactly what's going on.
>
> > - The read path is responsible for marking extents poisoned after
> >   sufficient retry attempts (controlled by a new filesystem option).
>
> > - The data move path is allowed to move extents after a read error, if
> >   it's a checksum error (giving them a new checksum) and they've been
> >   poisoned (i.e.
> >   the extent flags feature is enabled).
>
> So if just a single bit flips, the extent gets moved onto better
> storage, and the file gets re-checksummed. But what about if more
> bits go bad afterwards?

The new checksum means they're detected, and if you have replication
enabled they'll be corrected automatically, like any other IO error.