Re: Re[16]: Linux Raid + BTRFS: rookie mistake ... dd bs=1M

On Wed, Apr 3, 2019 at 8:36 PM <no_spam@xxxxxxxxxxxx> wrote:
>
> Chris,
> I had Synology look at the enclosure and they reported:
> "
> Our developers have looked into this, and they were able to fix the
> corrupted LVM metadata.
> Unfortunately, it looks like the dd operation corrupted the Btrfs root
> tree, too. The root tree is the entrance of your filesystem and tells
> the Btrfs module where to find all the other trees--

Yes, this is generally fatal. But it's also why Btrfs keeps duplicate
metadata by default. If in fact the accidental dd command was only a
1MiB erase, it seems rather unlikely that both copies of the metadata
were damaged. Sometimes the copies are co-located really close to each
other, and sometimes not (there's no rule for this in Btrfs).
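
If you or they want to check how much of the root tree survived, a
scan along these lines (the device path is just a placeholder for the
Synology logical volume) will list candidate root tree blocks and
their generations:

# btrfs-find-root /dev/<lv>

If it reports a root at or near the generation the superblock expects,
that's a copy worth trying.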


> I've replied with the following request:
> "
> Can you provide details as to how the Synology NAS sets this up with
> LVM and Btrfs? Specifically, the Linux mailing list was unable to
> determine how the partitions interrelate and how they might be mapped
> to the logical volumes.
>
> Is it possible that any of the roots might be available on the
> untouched physical disk 3? This disk was not present in the enclosure
> when the dd was performed.
> "
> but I'm honestly a little lost as to what specifically to ask. Have
> you been able to formulate any specific questions I should ask them?

Did they fix the LVM metadata on your system, or just on a copy they
have? If either you or they can run these commands, it'll help answer
the metadata redundancy question.

# btrfs insp dump-s -fa /dev/
# btrfs --version

I forget what the logical volume is named, but that's what goes in the
place of /dev/
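
If the name has been forgotten on both sides, something like this
(assuming the md array is assembled and the volume group is active)
should show the device path to use:

# lsblk -f
# lvs -o vg_name,lv_name,lv_path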

A newer version of btrfs-progs may give `btrfs restore` a better
chance of scraping data off the volume. A Btrfs filesystem with
redundant metadata usually just mounts (with lots of scary complaints
as it automatically repairs damaged copies from good ones). If a
normal mount doesn't work, it means it's not finding good copies.
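
If it comes to that, a rough sketch of the usual escalation (device
path, mount point, and byte number below are all placeholders; the
usebackuproot mount option needs a reasonably recent kernel, older
ones spell it -o recovery) is:

# mount -o ro,usebackuproot /dev/<lv> /mnt
# btrfs restore -v -i -D /dev/<lv> /tmp/ignored
# btrfs restore -t <bytenr> -v -i /dev/<lv> /mnt/recovery

The -D flag is a dry run that only lists what restore can reach, and
-t takes a root tree byte number such as one reported by
btrfs-find-root.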

-- 
Chris Murphy


