Re[2]: Linux Raid + BTRFS: rookie mistake ... dd bs=1M

<<# blkid>>
This was captured without first running the mdadm -Asf command.

/dev/sdc1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="44d4d072-3ba8-4311-8157-0ac1dc51366c"
/dev/sdc2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="bf851b7b-7b3b-4ab7-8415-5f901bb6f14c"
/dev/sdc5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="0eb6400b-b985-2a17-f211-56ccbd14ca10" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="cae34893-fcde-4f94-8270-b3ad92fe0616"
/dev/sdc6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="0f8c3b31-a733-542b-f10c-2226809f4cf2" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="16e08212-c393-4b5b-b755-dfa9059b8479"
/dev/sda1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="805e508c-c480-4d46-9f70-d928f59e0cf5"
/dev/sda2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="32272db4-4819-4d9f-af73-bf23757c32bc"
/dev/sda5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="dc7ce307-1ded-88a6-cd85-d82ad7cefe67" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="07de2062-ae1f-40c2-a34b-920c38c48eaf"
/dev/sda6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="b3638502-e2db-f789-f469-0f3bc7955fe3" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="eb4c470f-3eb5-443e-885a-d027bdf1f193"
/dev/sdb1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="d70afd0f-6e25-4886-91e8-01ffe1f14006"
/dev/sdb2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="04a3c8a5-098b-4a74-88ec-2388e61a8287"
/dev/sdb5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="9190d8ea-a9c3-9d07-357a-c432394c0a48" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="cd1e030a-d307-413f-8d57-c78c13593c15"
/dev/sdb6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="091232be-a5a8-bb9a-7ed1-cde074fccc4b" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="db576be0-58fa-47e4-aa2f-8dc626f23212"
/dev/md2: UUID="RjBvSN-Lzko-zqTI-71FD-ESv7-OrPd-uLUeIC" TYPE="LVM2_member"
/dev/md3: PTUUID="1828c708-ca70-4672-9095-a1ee53065320" PTTYPE="gpt"

<<# pvs>>

WARNING: Device for PV pGwi1O-p59A-YOK9-EEYY-eMbu-1gkt-IJmvvG not found or rejected by a filter.
PV         VG     Fmt  Attr PSize  PFree
/dev/md2   vg1000 lvm2 a--  <2.72t    0
[unknown]  vg1000 lvm2 a-m  <8.19t    0


<<# vgs>>
  WARNING: Device for PV pGwi1O-p59A-YOK9-EEYY-eMbu-1gkt-IJmvvG not found or rejected by a filter.
  VG     #PV #LV #SN Attr   VSize  VFree
  vg1000   2   1   0 wz-pn- 10.90t    0

<<# lvs>>
  WARNING: Device for PV pGwi1O-p59A-YOK9-EEYY-eMbu-1gkt-IJmvvG not found or rejected by a filter.
  LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg1000 -wi-----p- 10.90t
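
In case it's useful in the meantime: my understanding is that the first MiB of a PV normally carries a text copy of the full VG metadata, so the surviving PV (/dev/md2) should still describe all of vg1000, including the UUID of the PV that got wiped (the pGwi1O-... one in the warnings above). Something like this should be strictly read-only (assuming the default metadata placement at the start of the PV):

    dd if=/dev/md2 bs=64k count=16 2>/dev/null | strings | less
    # then look for "vg1000 {" and the pv section carrying
    # pGwi1O-p59A-YOK9-EEYY-eMbu-1gkt-IJmvvG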

<<# mdadm -E /dev/sda6>>

/dev/sda6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 340a678e:167ca3d9:c185d6c8:a1d66183
           Name : Zittware-NAS916:3
  Creation Time : Thu May 25 01:26:52 2017
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB)
     Array Size : 8790740736 (8383.50 GiB 9001.72 GB)
  Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : b3638502:e2dbf789:f4690f3b:c7955fe3

    Update Time : Sat Mar  2 01:09:20 2019
       Checksum : f7119511 - correct
         Events : 16134

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
 
<< OK so you have a degraded four member raid5 array with a 64KiB strip
size. One drive is missing or has failed...>>

Correct; as I stated, the 4th drive (drive 2) was removed (disconnected): it was not attached to the NAS enclosure (or to anything else, for that matter) when the 1M wipe of /dev/md3 was done.


<< When was it disconnected? Before the 1MB wipe of /dev/md3 or after? >>

It was disconnected after a data scrub of the entire array, but before the 1MB wipe. The enclosure only has 4 drive bays, and I needed one bay for the new 10TB drive.

<<I'm not following the forensics logic. The instant you wiped 1MB of
/dev/md3, that write was propagated by raid5 to all four drives in
less than 1 second, unless drive number 2 was already removed.>>

Drive 2 (a 4TB drive) had already been removed for a "rebuild" of the array onto the 10TB drive. It was not present in the array when the bs=1M wipe of /dev/md3 was done.
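
For my own sanity, here's my back-of-envelope of what that wipe should have touched on each remaining member (assuming the dd started at offset 0 of /dev/md3 and wrote exactly 1 MiB; please correct me if I've got this wrong):

    # 4-disk raid5 with a 64 KiB chunk => 3 data chunks (192 KiB) per stripe
    echo $(( (1024 + 191) / 192 ))           # ~6 stripes touched by a 1 MiB write
    echo $(( ((1024 + 191) / 192) * 64 ))    # ~384 KiB rewritten on each present member,
                                             # just past its 2048-sector data offset;
                                             # the disconnected drive 2 should still hold
                                             # its pre-wipe chunks for those stripes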

<<This drive is a copy of which of the four? Note mdadm counts members
starting with 0, so if you look at the mdadm -D output, the missing
drive is number 2 (which would be the third drive if you're counting
from one).>>

On the Synology NAS the bay is labelled drive 3... but you are probably correct; that numbering is just marketing, to make it easy for the layman to identify which drive is which.

<< Skip that for now. At least it's safe in the current arrangement. I
want to know more before you put them in the NAS, let alone all four
drives back together again which would normally cause a resync to
start, but if nothing has changed on the 1,2,4 drives since 3 was kept
out at assembly time, there shouldn't be anything to resync. >>

Roger. Will do. Drive 2 is in my test PC, but power and SATA are disconnected from it.
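
When you do give the go-ahead, my rough plan for the test PC would be to compare the event counts on the three present members first and then assemble read-only; this is a sketch only (device names are whatever they enumerate as on that boot), and I won't run any of it until you confirm:

    mdadm --examine /dev/sd[abc]6 | grep -E 'Events|Device Role|Update Time'
    mdadm --assemble --readonly /dev/md3 /dev/sda6 /dev/sdb6 /dev/sdc6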

----

One person on Reddit suggested tonight (like you did) that a backup may exist on the NAS in /etc/lvm.
https://www.reddit.com/r/DataHoarder/comments/aws9iv/btrfs_shr_rookie_mistake_dd_bs1m/ehynwr3
I haven't tried booting the NAS without the drives in it... I kinda feel like the drives have to be in the system to actually see the /etc/lvm area (the DSM system partition lives on the drives themselves). I don't want to make matters worse than they already are, so I'm holding tight for specific suggestions.
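
If a backup for vg1000 does turn up under /etc/lvm/backup or /etc/lvm/archive once the system volume is readable, my understanding is the usual recovery path is roughly the following. This is a sketch only: the UUID below is the missing-PV UUID from the warnings above, the backup filename assumes the standard per-VG naming, and I won't run the write steps until the list agrees:

    # read-only: see what DSM saved for vg1000
    ls -l /etc/lvm/backup /etc/lvm/archive
    less /etc/lvm/backup/vg1000

    # later, write steps (NOT yet): recreate the wiped PV label with its old UUID,
    # then restore the VG metadata from the backup file
    pvcreate --uuid pGwi1O-p59A-YOK9-EEYY-eMbu-1gkt-IJmvvG \
             --restorefile /etc/lvm/backup/vg1000 /dev/md3
    vgcfgrestore -f /etc/lvm/backup/vg1000 vg1000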



