On 2025/2/3 18:34, Johannes Thumshirn wrote:
On 03.02.25 08:56, Christoph Hellwig wrote:
On Mon, Feb 03, 2025 at 07:47:53AM +0000, Johannes Thumshirn wrote:
The thing I don't like about the current RFC patchset is that it breaks
scrub, repair and device error statistics. It's nothing that can't be
solved, though. But as of now it just doesn't make any sense to me. We
at least need the FS to look at the BLK_STS_PROTECTION return value and
handle it accordingly in scrub, read repair and statistics.
And that's only for feature parity. I'd also like to see some
performance numbers and WAF reduction numbers, to show whether this is
really worth the hassle.
If we can store checksums in metadata / extended LBAs, that will help
WAF a lot, and also performance, because you only need one write
instead of two dependent writes, and also just one read.
Well, for the WAF part it'll save us 32 bytes per FS sector (typically
4k) in the btrfs case; that's ~0.8% of the space.
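(For reference, assuming the usual btrfs csum sizes, the per-4k-sector
overhead works out to roughly:

  32 B / 4096 B ≈ 0.78%   (sha256 / blake2b)
   8 B / 4096 B ≈ 0.20%   (xxhash64)
   4 B / 4096 B ≈ 0.10%   (crc32c, the default)

so the ~0.8% figure corresponds to the largest csum algorithms.)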
You forgot the csum tree COW part.
Updating the csum tree is pretty COW-heavy, and that's going to cause
quite a bit of extra wear.
Thus, although I do not think the RFC patch makes much sense compared
to the existing NODATASUM mount option, I'm interested in the hardware
csum handling.
The checksums in the current PI formats (minus the new ones in NVMe)
aren't that good, as Martin pointed out, but the biggest issue really
is that you need hardware that supports metadata or PI in the first
place. SATA doesn't support it at all. For NVMe, PI support is
generally a feature of gold-plated, fully featured enterprise devices
but not of the cheaper tiers. I've heard some talk of customers asking
for plain non-PI metadata in certain cheaper tiers, but not much of
that has actually materialized yet. If we ever get at least non-PI
metadata support on cheap NVMe drives, the idea of storing checksums
there would become very, very useful.
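As a very rough illustration of what that could look like from the
filesystem side, here is a minimal sketch (not the RFC code and not any
existing patchset) of attaching per-block csums as plain out-of-band
metadata on a write bio via the bio integrity API. The helper name
attach_fs_csums is made up, and it assumes a non-PI metadata format and
a csum buffer that fits in a single page:

/* Sketch only: attach filesystem-generated csums as plain (non-PI)
 * out-of-band metadata.  csum_len must match the namespace's metadata
 * size times the number of logical blocks in the bio.
 */
#include <linux/bio.h>
#include <linux/bio-integrity.h>	/* newer kernels split the bip bits out of bio.h */
#include <linux/mm.h>

static int attach_fs_csums(struct bio *bio, void *csums, unsigned int csum_len)
{
	struct bio_integrity_payload *bip;

	/* One integrity vec is enough for a single contiguous csum buffer. */
	bip = bio_integrity_alloc(bio, GFP_NOFS, 1);
	if (IS_ERR(bip))
		return PTR_ERR(bip);

	/*
	 * With a non-PI metadata format the controller stores these bytes
	 * verbatim next to each block and hands them back on read; the
	 * block layer does not interpret them.
	 */
	if (bio_integrity_add_page(bio, virt_to_page(csums), csum_len,
				   offset_in_page(csums)) != csum_len)
		return -ENOMEM;

	return 0;
}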
The other pain point of btrfs' data checksums is related to direct IO
and the buffer contents changing mid-flight.
It's pretty easy to reproduce: just start a VM with its image on
btrfs, set the VM cache mode to none (i.e. using direct IO), run
XFS/EXT4 inside the VM, and run some fsstress; it should cause btrfs
to hit data csum mismatch false alerts.
The root cause is the contents changing during direct IO: XFS/EXT4
don't wait for folio writeback to finish before re-dirtying the folio
(if AS_STABLE_WRITES is not set).
That's a valid optimization, but it means the contents can change
while the write is in flight.
(I know AS_STABLE_WRITES exists, but I'm not sure if qemu passes that
requirement through to the virtio block devices inside the VM.)
And since btrfs calculates the checksum before submitting the real
bio, if the contents change after the csum calculation and before the
bio finishes, we will get a csum mismatch.
So if the csum generation can happen inside the hardware, it will
solve the problem of contents changing during direct IO.
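To make the ordering concrete, here is a rough sketch of the race (an
illustration only, not the exact btrfs call chain), together with a
rough paraphrase of the stable-writes helper in mm/page-writeback.c
that the upper filesystem relies on:

/*
 * Sketch of the false csum mismatch (not the exact call chain):
 *
 *   guest FS / application          btrfs (host, qemu O_DIRECT write)
 *   ----------------------          ---------------------------------
 *   issue write of buffer v1  --->  csum calculated over v1
 *                                   bio submitted
 *   redirty the same folio,
 *   contents become v2
 *   (allowed when AS_STABLE_WRITES
 *    is not set)
 *                                   device DMAs v2 to disk
 *                                   bio completes
 *
 *   => on-disk data is v2 but the stored csum matches v1, so later
 *      reads of that extent report a csum mismatch.
 *
 * Whether the upper filesystem waits for in-flight writeback before
 * modifying the folio again is governed by AS_STABLE_WRITES:
 */
void folio_wait_stable(struct folio *folio)
{
	/* Only wait if the backing device asked for stable pages. */
	if (mapping_stable_writes(folio_mapping(folio)))
		folio_wait_writeback(folio);
}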
Thanks,
Qu
FYI, I'll post my hacky XFS data checksumming code to show how relatively
simple using the out of band metadata is for file system based
checksumming.