Re: Re[4]: Linux Raid + BTRFS: rookie mistake ... dd bs=1M

On Thu, Mar 7, 2019 at 12:26 AM John Zitterkopf <zittware@xxxxxxxxx> wrote:
> # blkid
>
> This was done without the mdadm -Asf command.
>
> /dev/sdc1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="44d4d072-3ba8-4311-8157-0ac1dc51366c"
> /dev/sdc2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="bf851b7b-7b3b-4ab7-8415-5f901bb6f14c"
> /dev/sdc5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="0eb6400b-b985-2a17-f211-56ccbd14ca10" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="cae34893-fcde-4f94-8270-b3ad92fe0616"
> /dev/sdc6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="0f8c3b31-a733-542b-f10c-2226809f4cf2" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="16e08212-c393-4b5b-b755-dfa9059b8479"
> /dev/sda1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="805e508c-c480-4d46-9f70-d928f59e0cf5"
> /dev/sda2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="32272db4-4819-4d9f-af73-bf23757c32bc"
> /dev/sda5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="dc7ce307-1ded-88a6-cd85-d82ad7cefe67" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="07de2062-ae1f-40c2-a34b-920c38c48eaf"
> /dev/sda6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="b3638502-e2db-f789-f469-0f3bc7955fe3" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="eb4c470f-3eb5-443e-885a-d027bdf1f193"
> /dev/sdb1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="d70afd0f-6e25-4886-91e8-01ffe1f14006"
> /dev/sdb2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="04a3c8a5-098b-4a74-88ec-2388e61a8287"
> /dev/sdb5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="9190d8ea-a9c3-9d07-357a-c432394c0a48" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="cd1e030a-d307-413f-8d57-c78c13593c15"
> /dev/sdb6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="091232be-a5a8-bb9a-7ed1-cde074fccc4b" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="db576be0-58fa-47e4-aa2f-8dc626f23212"
> /dev/md2: UUID="RjBvSN-Lzko-zqTI-71FD-ESv7-OrPd-uLUeIC" TYPE="LVM2_member"
> /dev/md3: PTUUID="1828c708-ca70-4672-9095-a1ee53065320" PTTYPE="gpt"

OK, so blkid is only seeing partitions 1, 2, 5, and 6 on each drive,
and all of them are mdadm members. The open questions are why we only
see two md devices, and why partitions 3 and 4 don't show up. It's a
rabbit hole.
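
Before chasing that, a quick way to see whether partitions 3 and 4
exist at all: blkid only reports partitions carrying a signature it
recognizes, while either of these read-only commands lists everything
the kernel sees:

# cat /proc/partitions
# lsblk -o NAME,SIZE,TYPE,FSTYPE

If 3 and 4 show up there but not in blkid, they exist but have no
signature blkid recognizes.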

The two arrays we know about are running, but no file systems are
discovered at all. I think it's weird that the VG contains both
/dev/md2 and /dev/md3 as PVs, and yet only one LV is listed.
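
To double-check that, these read-only LVM queries show exactly which
PVs, VGs, and LVs the tools can see:

# pvs -v
# vgs -v
# lvs -a -o +devices

The extra devices column on lvs shows which PV(s) each LV actually
sits on.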

And the GPT signature on md3 is weird; it suggests a backup GPT at
the end of md3. What do you get for:

# gdisk -l /dev/md3

If that command isn't found you can try

# parted /dev/md3 u s p

Both are read-only commands.

And also run the same for /dev/sda (the whole drive, without a
partition number).
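
If you want to confirm the backup GPT directly, its header carries
the "EFI PART" signature in the last sector of the device (assuming
512-byte logical sectors, which is typical for md). This read-only
one-liner dumps that sector:

# dd if=/dev/md3 bs=512 skip=$(( $(blockdev --getsz /dev/md3) - 1 )) count=1 2>/dev/null | hexdump -C | head -3

blockdev --getsz reports the size in 512-byte sectors, so this reads
just the final one.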


> One person on Reddit suggested tonight (like you did) that a backup may exist on the NAS in /etc/lvm.
> https://www.reddit.com/r/DataHoarder/comments/aws9iv/btrfs_shr_rookie_mistake_dd_bs1m/ehynwr3
> I haven't tried booting without the drives in it... I kinda feel like the drives have to be in the system to actually see the /etc/lvm area. I don't want to make matters worse than they already are; so I'm holding tight for specific suggestions.

Two options: a) figure out how this thing assembles itself at boot
time in order to reveal the root file system and get at /etc/lvm; or
b) put the three drives in the NAS and boot it. Option a) is tedious
without a cheat sheet from Synology.
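
For what it's worth, a) might not be that bad: DSM normally keeps its
root file system on a small RAID1 (md0) built from partition 1 of
every disk, which matches the identical UUIDs on sda1/sdb1/sdc1 in
your blkid output. That layout is an assumption on my part, but if it
holds, something like this from the live environment should expose
/etc/lvm without writing anything:

# mdadm -A --readonly /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
# mount -o ro /dev/md0 /mnt
# ls /mnt/etc/lvm/backup /mnt/etc/lvm/archive

The --readonly and -o ro flags keep both the array and the mount from
touching the disks.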

The drive "md member 2/bay 3" that's currently out and out of sync
with the other three drives, has something that might be vaguely
interesting on it. I suggest doing two things at once:

Part 1:
Put the three NAS drives that are in the PC back into the NAS and boot
(degraded), and collect the information we really want:
# blkid
# mount
# grep -r md3 /etc/lvm
# cat /etc/fstab

And hopefully that gives us a hint of what we're going to look for.
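
If an LVM backup does turn up in /etc/lvm, vgcfgrestore can list
what's restorable without changing anything (VGNAME is whatever vgs
reports; --list is read-only):

# vgcfgrestore --list VGNAME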

Part 2:
Put the "missing" md member number 2/bay 3 drive into the PC, booting
from Live media as you have been.

# mdadm -E /dev/sdX6    ## where X is the letter for that drive

It was previously sdd in the NAS, but it'll be assigned something
else on the PC. The 6 comes from your previously supplied `mdadm -D
/dev/md3` output, which shows that the member device for this array
is on the 6th partition. You've already provided me the superblock
for /dev/sda6, which is md member number 0, so I have something to
compare against. I just want to make sure this is the thing we want
to copy. There is a data offset of 1MB, so my plan is to copy 1MB
(quite a bit more than we really need, but whatever).
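
The fields worth eyeballing against the sda6 superblock are Array
UUID, Data Offset, Device Role, and the Events count; a quick way to
pull just those:

# mdadm -E /dev/sdX6 | grep -E 'Array UUID|Data Offset|Device Role|Events'

If the Array UUID matches and Device Role says "Active device 2",
it's the member we think it is.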

Hey any list regulars wanna check my math and logic here please?

The superblock says the data offset is 2048 sectors (1MiB), so we
should be able to do:

# dd if=/dev/sdX6 bs=1M skip=1 count=1 of=/safepathtofile-sdX6

(skip counts in units of bs, so skip=1 with bs=1M skips exactly the
2048-sector data offset.)

1024KiB is what we want, and it'll take 6 full stripes to cover it (a
full stripe carries 192K of data, and 1024/192 = 5.3333, rounded up
to 6). 64K chunk times 4 drives is a 256K full stripe including
parity. 6 stripes is 1536KiB, and the 1/4 portion of that on one
member is 384KiB. (Therefore, alternately: bs=1024 skip=1024
count=384.)
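
A quick shell sanity check of that arithmetic (all in KiB; 4 members,
64K chunk, RAID5 so one chunk per stripe is parity):

# chunk=64; members=4
# echo $(( chunk * members ))        # 256: full stripe incl. parity
# echo $(( chunk * (members - 1) ))  # 192: data per full stripe
# echo $(( (1024 + 191) / 192 ))     # 6: stripes to cover 1024 KiB
# echo $(( 6 * chunk ))              # 384: KiB read from one member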

And with left-symmetric layout (columns are members 0-3, rows are the
first six stripes):

0     1     2     P0
4     5     P1    3
8     P2    6     7
P3    9     10    11
12    13    14    P4
16    17    P5    15

This unwiped md member 2 drive should be the third column (member
numbering starts at 0), correct? So the file resulting from the dd
command above should contain strips: 2, P1, 6, 10, 14, P5.
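
For anyone checking along: with left-symmetric layout on n members,
parity for stripe s lands in column (n - 1 - s mod n) mod n. A
one-off loop confirms the rotation in the table above:

# n=4; for s in 0 1 2 3 4 5; do echo "stripe $s: parity in column $(( (n - 1 - s % n + n) % n ))"; done

That puts parity in column 2 for stripes 1 and 5, matching P1 and P5
landing on this member.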

So John, you can run the dd command above (again, it's read-only and
safe), or you can wait and hope someone else monitoring the thread
chimes in.

-- 
Chris Murphy


