Re: [PATCH v3 2/3] block: verify data when endio

On Fri, Mar 29, 2019 at 10:17:22PM -0400, Martin K. Petersen wrote:
> 
> Jens,
> 
> > You will not need a callback in the bio, you will just have a private
> > end_io function for that particular bio that does the verification.
> 
> The saving grace for the integrity stuff is that once all the child bios
> complete, we no longer care about their completion context and we have
> the parent bio submitted by the filesystem we can use to verify the PI
> against.
> 
> For the redundant copy use case, however, I am guessing that the
> filesystem folks would want the same thing. I.e. verify the structure of
> the data received once the parent bio completes. However, at that point
> all the slicing and dicing completion state is lost.

Right, that's the problem. We already run the verifier on completion
of the bio that the filesystem sends down the stack, but that then
means....

> And thus there is
> no way to know that the failure was due to mirror B two layers down the
> stack. Nor is there any way to retry the I/O without having recorded a
> completion breadcrumb trail for every child bio.

.... we have this problem when the verifier fails. i.e. the bio
needs to contain sufficient information for the filesystem to
implement some robust retry mechanism without having any clue what
lies below it or what failed.

> The other approach is the callback where each stacking layer--which
> knows about redundancy--can do verification of a bio upon completion.

*nod*

> However, that suffers from another headache in that the I/O can get
> arbitrarily sliced and diced in units of 512 bytes.

Right, but we don't need to support that insane case. Indeed, if
it wasn't already obvious, we _can't support it_ because the
filesystem verifiers can't do partial verification. i.e. part of
the verification is CRC validation of the whole bio, not to mention
that filesystem structure fragments cannot be safely parsed,
interpreted and/or verified without the whole structure first being
read in.

This means the verifier is only useful if the entire IO can be
passed down to the next layer. IOWs, if the bio has to be sliced and
diced to be issued to the next layer down, then we have a hard stop
on verifier propagation. Put simply, the verifier can only be run at
the lowest layer that sees the whole parent bio context. Hence
sliced and diced child bios won't have the parent verifier attached
to them, and so we can ignore the whole "slice and dice" problem
altogether.

Further, arguing about slicing and dicing misses the key observation
that the filesystem can largely avoid slicing and dicing for most
common cases. i.e. the IO sizes (XFS metadata!) we are talking about
here are very small and well aligned to the underlying block
devices, and so are extremely unlikely to cross multi-device
boundaries.  And, of course, if the underlying device can't verify
the bio for whatever reason, we'll still do it at the filesystem IO
completion and so detect corruption like we do now.

IOWs, we need to look at this problem from a "whole stack" point of
view, not just cry about how "bios are too flexible and so make this
too hard!". The filesystem greatly constrains the alignment and
slicing/dicing problem to the point where it should be non-existent,
we have a clearly defined hard stop where verifier propagation
terminates, and if all else fails we can still detect corruption at
the filesystem level just like we do now. The worst thing that
happens here is we give up the capability for automatic block device
recovery and repair of damaged copies, which we can't do right now,
so it's essentially status quo...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
