Re: [PATCH] generic: skip dm-log-writes tests on XFS v5 superblock filesystems

On 2019/2/27 12:49 PM, Amir Goldstein wrote:
[snip]
>>> Indeed.
>>
>> May I ask a stupid question?
>>
>> How does it matter whether the device is clean or not?
>> Shouldn't the journal/metadata or whatever be self-contained?
>>
> 
> Yes and no.
> 
> The simplest example (not limited to xfs, and I'm not sure it works this way in xfs)
> is how you find the last valid journal commit entry. It should have a correct CRC
> and the largest LSN. But if you replay IO on top of an existing journal without
> wiping it first, then journal recovery will continue past the point it was meant to
> replay to, or worse.

The journal doesn't have a superblock-like structure to tell it where to
stop replaying?

If not, then indeed we need to discard the journal before writing a new one.
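To make the failure mode concrete, here is a minimal sketch (a hypothetical
record format, not xfs's actual on-disk layout) of the recovery scan Amir
describes: recovery trusts the record with a valid CRC and the largest LSN,
so a stale record "from the future" left over from a previous run still
passes the CRC check and drags replay past the intended stopping point.

```python
import binascii
from dataclasses import dataclass

@dataclass
class JournalRecord:
    lsn: int        # log sequence number
    payload: bytes
    crc: int        # CRC stored alongside the payload

def valid(rec):
    # A record is trusted if its stored CRC matches its payload.
    return binascii.crc32(rec.payload) == rec.crc

def find_replay_end(records):
    # Recovery replays up to the valid record with the largest LSN.
    best = None
    for rec in records:
        if valid(rec) and (best is None or rec.lsn > best.lsn):
            best = rec
    return best

# Clean device: only records written up to the replay point are present.
clean = [JournalRecord(1, b"a", binascii.crc32(b"a")),
         JournalRecord(2, b"b", binascii.crc32(b"b"))]

# Dirty device: a stale record with LSN 9 survived a previous run and
# still carries a valid CRC, so recovery overshoots.
dirty = clean + [JournalRecord(9, b"stale", binascii.crc32(b"stale"))]

assert find_replay_end(clean).lsn == 2
assert find_replay_end(dirty).lsn == 9   # replays past the intended point
```

This is why the device either needs its journal wiped (or discarded, as in
the mkfs replay below) before each replay pass.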

> 
> The problem that Brian describes is more complicated than that and not
> limited to the data in the journal IIUC, but I think what I described above
> may also plague ext4 and xfs v4.
> 
>>>
>>>> i.e. this sounds like a "block device we are replaying onto has
>>>> stale data in it" problem because we are replaying the same
>>>> filesystem over the top of itself.  Hence there are no unique
>>>> identifiers in the metadata that can detect stale metadata in
>>>> the block device.
>>>>
>>>> I'm surprised that we haven't tripped over this much earlier than
>>>> this...
>>>>
>>>
>>> I remember asking myself the same thing... it's coming back to me
>>> now. I really remember having this discussion during test review.
>>> generic/482 is an adaptation of Josef's test script [1], which
>>> does log recovery onto a snapshot on every FUA checkpoint.
>>>
>>> [1] https://github.com/josefbacik/log-writes/blob/master/replay-fsck-wrapper.sh
>>>
>>> Setting up snapshots for every checkpoint was empirically found to take
>>> more test runtime than replaying the log from the start for each checkpoint.
>>> That observation was limited to the systems that Qu and Eryu tested on.
>>>
>>> IIRC, what usually took care of cleaning the block device was replaying the
>>> "discard everything" IO from mkfs time.
>>
>> This "discard everything" assumption doesn't look right to me.
>> Although most mkfs would discard at least part of the device, even
>> without discarding the newly created fs should be self-contained, no
>> wild pointer points to some garbage.
>>
> 
> It's true. We shouldn't make this assumption.
> That was my explanation to Dave's question, how come we didn't see
> this before?
> 
> Here is my log-writes info from generic/482:
> ./src/log-writes/replay-log -vv --find --end-mark mkfs --log
> $LOGWRITES_DEV |grep DISCARD
> seek entry 0@2: 0, size 8388607, flags 0x4(DISCARD)
> seek entry 1@3: 8388607, size 8388607, flags 0x4(DISCARD)
> seek entry 2@4: 16777214, size 4194306, flags 0x4(DISCARD)
> 
>> I thought all metadata/journal writes should be self-contained, even for
>> later fs writes.
>>
>> Am I missing something? Or do I get too poisoned by btrfs CoW?
>>
> 
> I'd be very surprised if btrfs cannot be flipped by seeing stale data "from
> the future" in the block device. Seems to me like the entire concept of
> CoW and metadata checksums is completely subverted by the existence
> of correct checksums on "stale metadata from the future".

It seems that metadata CoW makes it impossible to see future data.

All btrees are updated via CoW, so no metadata is overwritten
during a transaction.
Only the superblock is overwritten, and normally the superblock is
updated atomically.

So either the old superblock is still there, and all we can see is old
tree pointers; or the new superblock is there, and all we can see is
new tree pointers.
New metadata is never written over old metadata, so there is no way to
see future metadata.
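As a toy model of this argument (simplified structures, not btrfs's actual
on-disk format): tree blocks are copied on write to fresh addresses, and the
only in-place update is the atomic flip of the superblock's root pointer, so
a reader following the superblock reaches exactly one consistent tree.

```python
# Toy CoW model: blocks referenced by a committed superblock are never
# overwritten; a transaction writes new blocks elsewhere and then
# atomically swaps the superblock's root pointer.

blocks = {}                   # block address -> tree contents
superblock = {"root": None}   # the only structure updated in place
next_addr = 0

def alloc(contents):
    # CoW allocation: new contents always go to a fresh address.
    global next_addr
    addr = next_addr
    next_addr += 1
    blocks[addr] = contents
    return addr

def commit(contents):
    # Write the new tree first, then flip the root pointer atomically.
    addr = alloc(contents)
    superblock["root"] = addr

def read_tree():
    # A reader sees exactly the tree the superblock points at.
    return blocks[superblock["root"]]

commit("old tree")
old_root = superblock["root"]
commit("new tree")

# The old blocks were never overwritten; a reader sees one tree or the
# other, never a mix, and never "future" metadata behind an old root.
assert blocks[old_root] == "old tree"
assert read_tree() == "new tree"
```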

Thanks,
Qu

> 
> Thanks,
> Amir.
> 
