Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

On 2018-03-06 17:03, Amir Goldstein wrote:
> On Tue, Mar 6, 2018 at 10:15 AM, Qu Wenruo <wqu@xxxxxxxx> wrote:
>> There have been some btrfs corruption reports on the mailing list for a
>> while. Although such corruption is pretty rare and almost impossible to
>> reproduce, with dm-log-writes we found it is highly related to the v1
>> space cache.
>>
>> Unlike journal based filesystems, btrfs relies completely on metadata
>> CoW to protect itself from power loss, which requires the extent
>> allocator to avoid allocating extents over existing extent ranges.
>> Btrfs also uses the free space cache to speed up such allocation.
>>
>> However there is a problem: the v1 space cache is not protected by data
>> CoW and can be corrupted during power loss.
>> So btrfs does extra checks on the free space cache, verifying its in-file
>> csum, its generation, and the free space recorded in the cache against
>> the extent tree.
>>
>> The problem is that, under heavy concurrency, the v1 space cache can be
>> corrupted even during normal operation without power loss.
>> And we believe a corrupted space cache can break btrfs metadata CoW and
>> lead to the rare corruption at the next power loss.
>>
>> The most obvious symptom will be a difference in free space.
>>
>> This will be caught by the kernel, but such a check is quite weak, and
>> if the net free space change in one transaction is 0, the corrupted
>> cache can still be loaded by the kernel.
>>
>> In this case, btrfs check would report things like:
>> ------
>> block group 298844160 has wrong amount of free space
>> failed to load free space cache for block group 298844160
>> ------
>>
>> But considering the test case uses notreelog, btrfs won't do the
>> sub-transaction commits which don't increase the generation, so each
>> transaction should be consistent and nothing should be reported at all.
>>
>> Furthermore, we can even find corrupted file extents like:
>> ------
>> root 5 inode 261 errors 100, file extent discount
>> Found file extent holes:
>>         start: 962560, len: 32768
>> ERROR: errors found in fs roots
>> ------
>>
> 
> So what is the expectation from this test on upstream btrfs?
> Probable failure? Reliable failure?

Reliable failure, as the root cause is not fully exposed yet.

> Are there random seeds to fsstress that can make the test fail reliably?

Since concurrency is involved, I don't think a fixed seed would help much.
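
Just for reference, pinning the seed would look something like the
sketch below (the -s option is fsstress' random seed; the other numbers
are placeholders, not what the test really uses), and it still would
not control the scheduling between the fsstress workers:

------
run_check $FSSTRESS_PROG -s 12345 \
	-n $((200 * LOAD_FACTOR)) -p $((4 * LOAD_FACTOR)) \
	-d $SCRATCH_MNT
------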

> Or does failure also depend on IO timing and other uncontrolled parameters?

Currently, concurrency is the main factor.

>> +#
>> +#-----------------------------------------------------------------------
>> +# Copyright (c) SUSE.  All Rights Reserved.
> 
> 2018?
>> +#
[snip]
>> +
>> +_log_writes_replay_log_entry_range $prev >> $seqres.full 2>&1
>> +while [ ! -z "$cur" ]; do
>> +       _log_writes_replay_log_entry_range $cur $prev
>> +       # Catch the btrfs check output into temp file, as we need to
>> +       # grep the output to find the cache corruption
>> +       $BTRFS_UTIL_PROG check --check-data-csum $SCRATCH_DEV &> $tmp.fsck
> 
> So by making this a btrfs specific test you avoid the need to mount/umount and
> revert to $prev. Right?

Yes. In particular, the notreelog mount option disables the journal-like behavior.

> 
> Please spell out the missing pieces for making a generic variant of
> this test, so if anyone wants to pick it up they have a good starting point.
> Or maybe you still intend to post a generic test as well?

I'm still working on the generic test, but the priority is fixing the
btrfs corruption.

As for the missing pieces, we need dm-snapshot so that journal based
filesystems can replay their log without polluting the original device.

Apart from that, the current code should illustrate the framework.
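
A very rough sketch of that missing piece (only an illustration:
$SNAPSHOT_COW_DEV and the "replay-snap" name are made up, not existing
fstests helpers, and it assumes _check_scratch_fs can be pointed at the
snapshot device):

------
# Replay the log up to the current FUA write, then let the journal
# replay happen on a throw-away dm-snapshot so the replayed device
# itself is never modified.
_log_writes_replay_log_entry_range $cur $prev

size=$(blockdev --getsz $SCRATCH_DEV)
# Non-persistent snapshot, 8-sector (4KiB) CoW chunks
echo "0 $size snapshot $SCRATCH_DEV $SNAPSHOT_COW_DEV N 8" | \
	dmsetup create replay-snap

# Mounting triggers the journal replay, but only dirties the snapshot
mount /dev/mapper/replay-snap $SCRATCH_MNT
umount $SCRATCH_MNT
_check_scratch_fs /dev/mapper/replay-snap

dmsetup remove replay-snap
------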

> 
>> +
>> +       # Cache passed generation,csum and free space check but corrupted
>> +       # will be reported as error
>> +       if [ $? -ne 0 ]; then
>> +               cat $tmp.fsck >> $seqres.full
>> +               _fail "btrfs check found corruption"
>> +       fi
>> +
>> +       # Mount option has ruled out any possible factors affect space cache
>> +       # And we are at the FUA writes, no generation related problem should
>> +       # happen anyway
>> +       if grep -q -e 'failed to load free space cache' $tmp.fsck; then
>> +               cat $tmp.fsck >> $seqres.full
>> +               _fail "btrfs check found invalid space cache"
>> +       fi
>> +
>> +       prev=$cur
>> +       cur=$(_log_writes_find_next_fua $prev)
>> +       [ -z $cur ] && break
>> +
>> +       # Same as above
>> +       cur=$(($cur + 1))
>> +done
>> +
>> +echo "Silence is golden"
>> +
>> +# success, all done
>> +status=0
>> +exit
>> diff --git a/tests/btrfs/159.out b/tests/btrfs/159.out
>> new file mode 100644
>> index 00000000..e569e60c
>> --- /dev/null
>> +++ b/tests/btrfs/159.out
>> @@ -0,0 +1,2 @@
>> +QA output created by 159
>> +Silence is golden
>> diff --git a/tests/btrfs/group b/tests/btrfs/group
>> index 8007e07e..bc83db94 100644
>> --- a/tests/btrfs/group
>> +++ b/tests/btrfs/group
>> @@ -161,3 +161,4 @@
>>  156 auto quick trim
>>  157 auto quick raid
>>  158 auto quick raid scrub
>> +159 auto
> 
> Please add to "replay" group.
> 
> How long does the test run? Is it not quick?

It's quick, less than one minute.

But since it's using LOAD_FACTOR, "quick" doesn't seem appropriate here.
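
So the group entry would probably end up as something like (assuming
the "replay" group is fine to use here):

------
159 auto replay
------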

Thanks,
Qu

> 
> Thanks,
> Amir.
> 
