<<Did they fix the LVM metadata on your system? Or just fixed it on a
copy they have?>>
They remote logged into my NAS and did a repair attempt on my live
disks (the three disks that got hit by dd). I still have the original
disk 3, which wasn't touched by the dd command. Before I gave them
access, I made backups of the three disks backing /dev/md3 onto the
other 10TB drive I had.
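(For reference, those backups were raw images of the md3 member disks
written out to the spare drive, roughly along these lines; sdX and
/mnt/spare are placeholders, not my exact command:
# dd if=/dev/sdX of=/mnt/spare/sdX.img bs=1M conv=noerror,sync status=progress
one image per member disk.)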
<< If either you or they can run these commands, it'll help answer the
metadata redundancy question.
# btrfs insp dump-s -fa /dev/>>
ERROR: bad magic on superblock on /dev/md3 at 65536
ERROR: bad magic on superblock on /dev/md3 at 67108864
ERROR: bad magic on superblock on /dev/md3 at 274877906944
superblock: bytenr=65536, device=/dev/md3
---------------------------------------------------------
superblock: bytenr=67108864, device=/dev/md3
---------------------------------------------------------
superblock: bytenr=274877906944, device=/dev/md3
---------------------------------------------------------
<<# btrfs --version>>
btrfs-progs v4.0
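Since all three copies report bad magic, the other things I can try
from my side (assuming my btrfs-progs build has the -F/force flag) are
to force-dump the copies anyway, and to hexdump the raw bytes at the
first copy's offset (65536 from the output above) to see whether
anything btrfs-like is left there:
# btrfs insp dump-s -fFa /dev/md3
# dd if=/dev/md3 bs=1 skip=65536 count=256 2>/dev/null | hexdump -C
A healthy copy would show the _BHRfS_M magic 64 bytes into the
superblock; all zeros or unrelated data there would confirm the copies
really were overwritten.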
I got the following response from my tech support rep. This would seem
to indicate they need to double-check that they didn't make a mistake.
"
It varies depending on the RAID type you use. For SHR, the way it
works is that we create a Linux RAID array on the disks, then layer a
volume group on top of that, on which the logical volume is created,
and then create the filesystem on the LV. When you replace disks in an SHR
array to expand capacity, the NAS will create a second array using the
additional capacity on the new disks, then create a new PV in LVM and
add it to the volume group.
I'll also note that your message prompted our colleagues and myself to
investigate this further, and upon further investigation we noticed
that the configuration from the rescued LVM metadata doesn't fully
match the backup metadata on the original system partition. I'm
escalating back to our developers and asking them to double-check and
make sure that we correctly recreated the volume group and LV, and
I'll let you know as soon as I hear back.
"