Re: Re[2]: Linux Raid + BTRFS: rookie mistake ... dd bs=1M

John, your email address changed; are you subscribed to the list?

On Wed, Mar 6, 2019 at 8:09 PM John Zitterkopf <no_spam@xxxxxxxxxxxx> wrote:
>
> And what do you get for
>
> # grep -r md3 /etc/lvm
>
> if you get a bunch of hits in archive and backup, then there's a good
> chance there's LVM metadata in that zeroed-out 1MB.
>
> Can I run this command with the drives mounted outside the NAS enclosure?

Likely. It just takes more understanding of the storage stack than I
have at the moment, and I can't figure it out from the reddit output.

# blkid
# pvs
# vgs
# lvs
# mdadm -E /dev/sda6

These are all read-only commands; they don't change anything.
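
If it helps, all of that can be captured in one shot and pasted here.
This is just a sketch, assuming the three connected drives show up as
sda/sdb/sdc on the test PC, and the output filename is arbitrary:

# { blkid; pvs; vgs; lvs; mdadm -E /dev/sd[abc]6; } > storage-info.txt 2>&1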

> What do you get for
>
> # cat /proc/mdstat
>
> Again: on the test PC (not in the NAS enclosure) the following is returned:
>
> Personalities : [raid6] [raid5] [raid4]
> md2 : active (auto-read-only) raid5 sdc5[4] sdb5[1] sda5[0]
>       2915794368 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UU_U]
>
> md3 : active (auto-read-only) raid5 sdc6[3] sda6[0] sdb6[1]
>       8790740736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UU_U]

OK, so you have a degraded four-member raid5 array with a 64KiB chunk
size. One drive is missing or has failed...
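
If you want to confirm exactly which slot is the missing one, mdadm -D
on the assembled arrays is also read-only and prints the per-slot state
(device names taken from your mdstat above):

# mdadm -D /dev/md2
# mdadm -D /dev/md3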


> unused devices: <none>
>
> Yes; the third member is not connected right now. I'm "saving" it for any forensics.

When was it disconnected? Before the 1MB wipe of /dev/md3 or after?

I'm not following the forensics logic. The instant you wiped 1MB of
/dev/md3, that write was propagated by raid5 to all four drives in
less than 1 second, unless drive number 2 was already removed.
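
Rough arithmetic, assuming the wipe started at the very beginning of
/dev/md3: 1MiB / 64KiB = 16 chunks, and with three data chunks plus one
parity chunk per stripe on a four-drive raid5 that's roughly six
stripes, so every connected member had at least one data or parity
chunk rewritten within its first few hundred KiB.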


> My assumption as well. I don't know how Synology "mounts" these devices or how they are laid out.
> Part of me hopes that /dev/md2 is the "root" fs for the NAS and that /dev/md3 is the "volume1" filesystem, which is the actual data of the NAS array. If that assumption holds true, then the "backup" would be on the corrupted /dev/md3 filesystem in my user's home directory.

In your reddit thread with vgdisplay and lvdisplay, /dev/md2 and
/dev/md3 are PVs in a single VG "vg1000", and from that there is a
single LV that's the same size as the VG. So I don't think root is on
/dev/md2 - but... I don't really know how this storage stack is built.
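
To sanity-check that layout from the test PC, the LVM reporting
commands can be narrowed to the interesting fields. All read-only;
"vg1000" is just the VG name from your reddit output:

# pvs -o pv_name,vg_name,pv_size
# lvs -o lv_name,vg_name,lv_size,devices vg1000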


> Side note: I do have a copy of ONE of the member drives from the
> array. At the time the raid was zeroed... it was physically
> disconnected from the system in an antistatic bag. If so, could that
> be used to "rebuild" the first 1M?

Not entirely, because it only contains 1/4 of that missing 1MiB, and
in 64KiB pieces. That might be enough to get a hint of what was in
that 1MiB, but I'm hoping for an easier way.

This drive is a copy of which of the four? Note mdadm counts members
starting with 0, so if you look at the mdadm -D output, the missing
drive is number 2 (which would be the third drive if you're counting
from one).
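
One read-only way to tell which member that copy is would be to examine
its superblock; the device name here is just a placeholder for wherever
the copied drive shows up:

# mdadm -E /dev/sdX6 | grep -E 'Device Role|Array UUID'

For 1.2 metadata the Device Role line should report something like
"Active device N", again counting from 0.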


> Again: I'm about 90% sure this needs to be done with the drives in the NAS enclosure. Guess it's time to swap them back into the NAS box.

Skip that for now. At least it's safe in the current arrangement. I
want to know more before you put them in the NAS, let alone put all
four drives back together again, which would normally cause a resync
to start. But if nothing has changed on drives 1, 2, and 4 since drive
3 was kept out at assembly time, there shouldn't be anything to
resync.
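
If you want to verify that before reassembling anything, comparing the
event counters in the member superblocks is read-only and should show
whether those three drives are still in sync with each other (device
names assumed from the test PC):

# mdadm -E /dev/sd[abc]6 | grep -E 'Events|Update Time'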


--
Chris Murphy



