Re: About the md-bitmap behavior

On Wed, Jun 22, 2022 at 5:39 PM Qu Wenruo <quwenruo.btrfs@xxxxxxx> wrote:
>
[...]
> E.g.
> btrfs uses 64KiB as stripe size.
> O = Old data
> N = New writes
>
>         0       32K     64K
> D1      |OOOOOOO|NNNNNNN|
> D2      |NNNNNNN|OOOOOOO|
> P       |NNNNNNN|NNNNNNN|
>
> In the above case, no matter whether the new writes reach the disks, as
> long as the crash happens before we update all the metadata and the
> superblock (which implies a flush of all involved devices), the fs will
> only try to read the old data.

I guess we are using "write hole" for different scenarios. I use "write hole"
for the case where we corrupt data that is not being written to. This happens
with the combination of a failed drive and a power loss. For example, take a
raid5 with 3 drives, where each stripe has two data chunks and one parity
chunk. When D1 has failed, a read of D1 is reconstructed from D2 and P, and a
write to the stripe requires updating D2 and P at the same time. Now imagine
we lose power (or crash) while writing to D2 (and P). When the system comes
back after reboot, D2 and P are out of sync, so D1 can no longer be
reconstructed correctly: we have lost both D2 and D1. Note that D1 was not
being written to before the power loss.
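To make the scenario concrete, here is a minimal illustration (plain C, with
one byte standing in for a whole chunk; purely illustrative, not md or btrfs
code) of how the stale parity corrupts the reconstruction of D1 even though
D1 was never written:

#include <stdio.h>
#include <stdint.h>

/* One byte stands in for a whole chunk; parity is P = D1 ^ D2. */
int main(void)
{
	uint8_t d1 = 0xAA;           /* on the failed drive, only reachable via D2 ^ P */
	uint8_t d2 = 0x55;
	uint8_t p  = d1 ^ d2;        /* parity written before the drive failed */

	/* Degraded write: update D2, which also requires rewriting P.
	 * Power is lost after D2 hits the disk but before the new P does. */
	uint8_t d2_new = 0x77;
	d2 = d2_new;                 /* reached the disk */
	/* p = d1 ^ d2_new; */       /* never reached the disk: write hole */

	uint8_t d1_rebuilt = d2 ^ p; /* reconstruction after reboot */
	printf("original D1 = 0x%02x, reconstructed D1 = 0x%02x\n", d1, d1_rebuilt);
	return 0;                    /* 0xaa vs 0x88: D1 is silently corrupted */
}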

For btrfs, maybe we can avoid the write hole by NOT writing to D2 while D1
contains valid data (and its drive has failed). Instead, we can write a new
version of D1 and D2 to a different stripe, as sketched below. If we lose
power during the write, the old data is not corrupted. Does this make sense?
I am not sure whether it is practical in btrfs though.
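Roughly what I have in mind, again as a toy model (one-byte chunks, two
stripe slots per "disk", all in memory; hypothetical, not btrfs code):

#include <stdio.h>
#include <stdint.h>

/* Slot 0 is the live stripe, slot 1 is unused space. The failed drive D1
 * is not modeled explicitly: its content exists only as D2 ^ P. */
int main(void)
{
	uint8_t d2_disk[2] = { 0x55, 0 };
	uint8_t p_disk[2]  = { 0xAA ^ 0x55, 0 };
	int live = 0;                 /* logical -> physical stripe mapping */

	/* Degraded CoW write: place new D2 (and matching P) in the unused slot. */
	uint8_t d1_new = 0xAA;        /* unchanged content of the failed drive */
	uint8_t d2_new = 0x77;
	d2_disk[1] = d2_new;
	/* crash here: p_disk[1] was never written */

	/* Since 'live' still points at slot 0, the old stripe is untouched: */
	printf("D1 after crash  = 0x%02x (still 0xaa)\n",
	       d2_disk[live] ^ p_disk[live]);

	/* Without a crash we would finish the new stripe and flip the mapping: */
	p_disk[1] = d1_new ^ d2_new;
	live = 1;
	printf("D1 after commit = 0x%02x, D2 = 0x%02x\n",
	       d2_disk[live] ^ p_disk[live], d2_disk[live]);
	return 0;
}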

>
> So at this point, reads of the old data are still correct.
> But the parity no longer matches, thus degrading our ability to tolerate
> device loss.
>
> With write-intent bitmap, we know this full stripe has something out of
> sync, so we can re-calculate the parity.
>
> However, all of the above depends on two things:
>
> - The new write is CoWed.
>    It's mandatory for btrfs metadata, so no problem. But for btrfs data,
>    we can have NODATACOW (which also implies NODATASUM), and in that case,
>    corruption will be unavoidable.
>
> - The old data should never be changed
>    This means the device can not disappear during the recovery.
>    If power loss + device missing happens, this will not work at all.
>
> >
> > You must either:
> >   1/ have a safe duplicate of the blocks being written, so they can be
> >     recovered and re-written after a crash.  This is what journalling
> >     does.  Or
>
> Yes, a journal would be the next step to handle the NODATACOW case and the
> device-missing case.
>
> >   2/ Only write to location which don't contain valid data.  i.e.  always
> >     write full stripes to locations which are unused on each device.
> >     This way you cannot lose existing data.  Worst case: that whole
> >     stripe is ignored.  This is how I would handle RAID5 in a
> >     copy-on-write filesystem.
>
> That is something we considered in the past, but considering that even now
> we still have space reservation problems sometimes, I am afraid such a
> change would cause even more problems than it solves.
>
> >
> > However, I see you wrote:
> >> Thus as long as no device is missing, a write-intent-bitmap is enough to
> >> address the write hole in btrfs (at least for COW protected data and all
> >> metadata).
> >
> > That doesn't make sense.  If no device is missing, then there is no
> > write hole.
> > If no device is missing, all you need to do is recalculate the parity
> > blocks on any stripe that was recently written.
>
> That's exactly what we need and want to do.
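For reference, the write-intent idea is roughly the following (a simplified,
in-memory sketch; not the actual md bitmap code, and the helpers mentioned in
comments are hypothetical):

#include <stdint.h>

#define NR_STRIPES 1024

/* Simplified write-intent bitmap: one bit per full stripe. In a real
 * implementation the bitmap itself must be flushed to stable storage
 * before the data/parity writes are issued. */
static uint8_t bitmap[NR_STRIPES / 8];

static void set_dirty(unsigned s)   { bitmap[s / 8] |=  (1u << (s % 8)); }
static void clear_dirty(unsigned s) { bitmap[s / 8] &= ~(1u << (s % 8)); }
static int  is_dirty(unsigned s)    { return bitmap[s / 8] & (1u << (s % 8)); }

/* Write path: mark intent, persist the bitmap, write data + parity,
 * then clear the bit (the clear can be lazy). */
void stripe_write(unsigned s /*, data ... */)
{
	set_dirty(s);
	/* flush_bitmap(); write data chunks; write parity; */
	clear_dirty(s);
}

/* Recovery after a crash: only stripes whose bit survived need their
 * parity recomputed from the (old or new, but readable) data chunks. */
void resync_after_crash(void)
{
	for (unsigned s = 0; s < NR_STRIPES; s++)
		if (is_dirty(s)) {
			/* recompute_parity(s); */
			clear_dirty(s);
		}
}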

I guess the goal is to find the recently written data after a crash/power
loss. Can we achieve this with file mtime? (Sorry if this is a stupid
question...)

Thanks,
Song


