On 2024/4/11 22:55, Theodore Ts'o wrote:
On Thu, Apr 11, 2024 at 03:37:18PM +0200, Jan Kara wrote:
The vendor has confirmed that the firmware can only guarantee 512-byte write atomicity. Although the valid data is only 60 bytes, the entire commit block is used when calculating the checksum:
jbd2_commit_block_csum_verify:
...
calculated = jbd2_chksum(j, j->j_csum_seed, buf, j->j_blocksize);
...
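(For reference, the full verification helper in fs/jbd2/recovery.c looks roughly like the following; this is paraphrased from memory, so the exact code in any given kernel tree may differ slightly. The point is that the checksum covers the whole j_blocksize bytes, not just the commit header.)

static int jbd2_commit_block_csum_verify(journal_t *j, void *buf)
{
        struct commit_header *h;
        __be32 provided;
        __u32 calculated;

        if (!jbd2_journal_has_csum_v2or3(j))
                return 1;

        h = buf;
        provided = h->h_chksum[0];
        h->h_chksum[0] = 0;
        /* Checksum is computed over the entire journal block. */
        calculated = jbd2_chksum(j, j->j_csum_seed, buf, j->j_blocksize);
        h->h_chksum[0] = provided;

        return provided == cpu_to_be32(calculated);
}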
Ah, indeed. This is the bit I've missed. Thanks for the explanation! Still, I think trying to somehow automatically deal with a wrong commit block checksum is too dangerous because it can result in fs corruption in some (unlikely) cases. OTOH I understand a journal replay failure after a power failure isn't great either, so we need to think about how to fix this...
Unfortunately, the only fix I can think of would require changing how
we do the checksum to only include the portion of the jbd2 block which
contains valid data, per the header field. This would be a format
change which means that if a new kernel writes the new jbd2 format
(using a journal incompat flag, or a new checksum type), older kernels
and older versions of e2fsprogs wouldn't be able to validate the
journal. So rollout of the fix would have to be carefully managed.
- Ted
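If I understand the suggestion correctly, the verification side of such a format change might look roughly like the sketch below. This is only a hypothetical illustration: the helper name and the choice to checksum just sizeof(struct commit_header) bytes are my assumptions, and a real format change would presumably record the covered length in a header field and gate the behaviour on a new incompat flag or checksum type, as described above.

/*
 * Hypothetical sketch only -- this helper does not exist in jbd2.
 * It checksums just the valid part of the commit block instead of
 * the whole j_blocksize, which is the format change discussed above.
 */
static int jbd2_commit_block_csum_verify_valid_len(journal_t *j, void *buf)
{
        struct commit_header *h = buf;
        __be32 provided;
        __u32 calculated;

        if (!jbd2_journal_has_csum_v2or3(j))
                return 1;

        provided = h->h_chksum[0];
        h->h_chksum[0] = 0;
        /* Cover only the bytes that carry valid data. */
        calculated = jbd2_chksum(j, j->j_csum_seed, buf,
                                 sizeof(struct commit_header));
        h->h_chksum[0] = provided;

        return provided == cpu_to_be32(calculated);
}

The writer side and e2fsprogs would need matching changes, which is why the rollout would have to be carefully managed.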
I thought of a solution: when the commit block checksum is incorrect, retain the first 512 bytes of data, clear the rest of the block, and then recalculate the checksum to see whether it matches. For devices that can guarantee atomicity of 512 bytes or more, this can distinguish whether the commit block was written completely. For HDDs it may not always be able to distinguish, but it should still alleviate the problem to some extent.
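Roughly, the fallback I have in mind would look something like the untested sketch below. The helper name is made up for illustration, and it relies on the assumption that the commit block is zero-filled beyond the header when it is written, so a torn write whose first 512 bytes reached the media still verifies against a zero-padded copy.

/*
 * Untested sketch: if the normal checksum over the whole block fails,
 * retry with everything beyond the first 512 bytes treated as zero,
 * assuming the device only guarantees 512-byte write atomicity.
 */
static int jbd2_commit_block_csum_verify_partial(journal_t *j, void *buf)
{
        struct commit_header *h;
        __be32 provided;
        __u32 calculated;
        void *tmpbuf;

        if (!jbd2_journal_has_csum_v2or3(j))
                return 1;

        tmpbuf = kzalloc(j->j_blocksize, GFP_KERNEL);
        if (!tmpbuf)
                return 0;

        /* Keep the first 512 bytes, leave the rest zeroed. */
        memcpy(tmpbuf, buf, 512);

        h = tmpbuf;
        provided = h->h_chksum[0];
        h->h_chksum[0] = 0;
        calculated = jbd2_chksum(j, j->j_csum_seed, tmpbuf, j->j_blocksize);
        kfree(tmpbuf);

        return provided == cpu_to_be32(calculated);
}

kzalloc() keeps the copy zero-filled, which should match how the commit block is zero-padded beyond the header when it is written out.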