On 2024/6/20 10:50, Theodore Ts'o wrote:
> Apologies for not getting back to this until now; I was focused on
> finalizing changes for the merge window, and then I was on vacation
> for 3 or 4 weeks.
> On Thu, Apr 25, 2024 at 02:45:15PM +0800, Ye Bin wrote:
>> From: Ye Bin <yebin10@xxxxxxxxxx>
>>
>> We encountered a problem where the file system could not be mounted
>> after a power failure. Analysis of the file system image shows that
>> only part of the data was written to the last commit block: the valid
>> data of the commit block is confined to the first sector, but the
>> entire block takes part in the checksum calculation. The minimum
>> atomic write unit may differ between hardware.
>>
>> If the commit block's checksum is incorrect, zero out all of the data
>> except the 'commit_header' and recalculate the checksum. If the
>> checksum then matches, treat the block as partially committed.
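To make the recheck concrete, the idea is roughly the following. This
is a sketch only: the function name is illustrative and error handling
is simplified, but jbd2_chksum(), j_csum_seed and struct commit_header
are the existing jbd2 helpers:

        /*
         * Sketch: re-verify a commit block on the assumption that only
         * the first sector (which holds the commit_header) was written
         * atomically. Copy the header into a zero-filled block-sized
         * buffer and recompute the checksum over that buffer.
         */
        static bool commit_block_csum_verify_partial(journal_t *j, void *buf)
        {
                struct commit_header *h;
                void *tmp;
                __be32 provided;
                __u32 calculated;

                tmp = kzalloc(j->j_blocksize, GFP_KERNEL);
                if (!tmp)
                        return false;

                memcpy(tmp, buf, sizeof(struct commit_header));
                h = tmp;
                provided = h->h_chksum[0];
                h->h_chksum[0] = 0;     /* checksum is computed with this field zeroed */
                calculated = jbd2_chksum(j, j->j_csum_seed, tmp, j->j_blocksize);
                kfree(tmp);

                return provided == cpu_to_be32(calculated);
        }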
> This makes a lot of sense; thanks for changing the patch to do this.
>> However, if there are valid descriptor/revoke blocks after it, the
>> data is considered abnormal and journal replay is stopped.
> I'm not sure I understand your thinking behind this part of the patch,
> though. The descriptor/revoke blocks will have their own checksums,
> and while I grant that it would be... highly unusual for the commit
> block to be partially written as the result of a torn write, and then
> for there to be subsequent valid descriptor or revoke blocks (which
> would presumably be part of the next transaction), I wonder if the
> extra complexity is worth it.
>
> I can't think of a situation where this might happen other than, say,
> a bit flip in the portion of the commit block where we don't care
> about its contents; but in that case, after zeroing out the parts of
> the commit block that we don't care about, if the checksum is valid,
> we would presumably have managed to luckily recover from the bit
> flip. So continuing shouldn't be risky.
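For reference, descriptor/revoke blocks are indeed verified on their
own: the recovery path checks a checksum stored in a
jbd2_journal_block_tail at the end of each such block, roughly as
below (paraphrased from fs/jbd2/recovery.c; details vary across
kernel versions):

        static int descriptor_block_csum_verify(journal_t *j, void *buf)
        {
                struct jbd2_journal_block_tail *tail;
                __be32 provided;
                __u32 calculated;

                if (!jbd2_journal_has_csum_v2or3(j))
                        return 1;

                /* The tail lives in the last bytes of the block. */
                tail = (struct jbd2_journal_block_tail *)((char *)buf +
                        j->j_blocksize - sizeof(struct jbd2_journal_block_tail));

                /* Checksum is computed with t_checksum zeroed. */
                provided = tail->t_checksum;
                tail->t_checksum = 0;
                calculated = jbd2_chksum(j, j->j_csum_seed, buf, j->j_blocksize);
                tail->t_checksum = provided;

                return provided == cpu_to_be32(calculated);
        }

So a torn commit block cannot accidentally make a following descriptor
block verify; each block stands or falls on its own checksum.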
Scanning for valid log blocks cannot fundamentally protect against
malicious data tampering. My thinking was that if the kernel knows
there is a problem with the data, it should not insist on replaying
the journal; we should let the user decide what to do. On reflection,
though, continuing recovery based on scan results that we merely
believe to be correct is itself a presumptuous assumption. I agree
with your point that the problem should not be over-complicated. I
will remove these checks and send a new version.
> What am I missing?
>
> - Ted