Re[8]: Linux Raid + BTRFS: rookie mistake ... dd bs=1M

<<OK so what do you get for

# mdadm -E /dev/sda1

That could be your root fs with mdadm metadata v1.0 or 0.9 which is
why it shows up as ext4 and also a raid flag.>>


/dev/sda6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 340a678e:167ca3d9:c185d6c8:a1d66183
           Name : Zittware-NAS916:3
  Creation Time : Thu May 25 01:26:52 2017
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 5860493856 (2794.50 GiB 3000.57 GB)
     Array Size : 8790740736 (8383.50 GiB 9001.72 GB)
  Used Dev Size : 5860493824 (2794.50 GiB 3000.57 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : 62201ad0:0158f31a:ac35b379:7f13a583

    Update Time : Sat Mar  2 01:09:20 2019
       Checksum : 348b1754 - correct
         Events : 16134

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
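(Side note for my own reference: a quick way to see the metadata version and RAID level on every member partition in one pass. The partition list is just taken from the blkid output below, so adjust as needed.)

$ for p in /dev/sda1 /dev/sda2 /dev/sda5 /dev/sda6; do echo "== $p =="; sudo mdadm -E "$p" | grep -E 'Version|Raid Level'; done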


<<The usual reason why blkid does not return with any information is
because the user you're logged in as is not root. You might need to
do:

$ sudo blkid>>

Good call. Don't know why I didn't think of that. I guess I was expecting a permission denied error instead of blank output.

/dev/sdd1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="44d4d072-3ba8-4311-8157-0ac1dc51366c"
/dev/sdd2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="bf851b7b-7b3b-4ab7-8415-5f901bb6f14c"
/dev/sdd5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="0eb6400b-b985-2a17-f211-56ccbd14ca10" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="cae34893-fcde-4f94-8270-b3ad92fe0616"
/dev/sdd6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="0f8c3b31-a733-542b-f10c-2226809f4cf2" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="16e08212-c393-4b5b-b755-dfa9059b8479"
/dev/sda1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="805e508c-c480-4d46-9f70-d928f59e0cf5"
/dev/sda2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="32272db4-4819-4d9f-af73-bf23757c32bc"
/dev/sda5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="dc7ce307-1ded-88a6-cd85-d82ad7cefe67" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="07de2062-ae1f-40c2-a34b-920c38c48eaf"
/dev/sda6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="b3638502-e2db-f789-f469-0f3bc7955fe3" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="eb4c470f-3eb5-443e-885a-d027bdf1f193"
/dev/sdb1: UUID="8cd11542-15f1-4c2c-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="d70afd0f-6e25-4886-91e8-01ffe1f14006"
/dev/sdb2: UUID="73451bf6-121b-75f1-f08f-e43e8582a597" TYPE="linux_raid_member" PARTUUID="04a3c8a5-098b-4a74-88ec-2388e61a8287"
/dev/sdb5: UUID="542cb926-b17b-a538-9565-3afcc0d35a3c" UUID_SUB="9190d8ea-a9c3-9d07-357a-c432394c0a48" LABEL="Zittware-NAS916:2" TYPE="linux_raid_member" PARTUUID="cd1e030a-d307-413f-8d57-c78c13593c15"
/dev/sdb6: UUID="340a678e-167c-a3d9-c185-d6c8a1d66183" UUID_SUB="091232be-a5a8-bb9a-7ed1-cde074fccc4b" LABEL="Zittware-NAS916:3" TYPE="linux_raid_member" PARTUUID="db576be0-58fa-47e4-aa2f-8dc626f23212"
/dev/sdeb1: UUID="dd7adcd9-4f09-a752-6bad-3242191f09c2" UUID_SUB="0522cd4c-d264-d1c3-d8f0-f288ab7e8283" LABEL="Zittware-NAS:4" TYPE="linux_raid_member" PARTUUID="961019f4-01"
/dev/md0: LABEL="1.42.6-5698" UUID="b2d6eb2b-3946-4b5b-83e8-12e2880fb83a" TYPE="ext4"
/dev/md1: UUID="3af0ced3-86c9-4d74-837f-1a9b0179cbbc" TYPE="swap"
/dev/zram0: UUID="4ac1c18f-a1e4-4a8b-b161-5c6160aff42f" TYPE="swap"
/dev/zram1: UUID="2cbe425f-1a74-4d68-bc44-7ddf18113028" TYPE="swap"
/dev/zram2: UUID="5f199d63-6617-4954-a20a-5abb6c957749" TYPE="swap"
/dev/zram3: UUID="7e9deea9-c9e5-4a67-9b62-1bd56b37a43e" TYPE="swap"
/dev/md4: LABEL="2017.05.25-01:26:55 v15101" UUID="f4bc4bd1-af0d-4d54-9b05-6a719ddec086" UUID_SUB="e40c544c-3d7d-4e01-93af-a5b70985d4f1" TYPE="btrfs"
/dev/md2: UUID="RjBvSN-Lzko-zqTI-71FD-ESv7-OrPd-uLUeIC" TYPE="LVM2_member"
/dev/synoboot1: SEC_TYPE="msdos" UUID="3179-DD88" TYPE="vfat" PARTUUID="f0c6ebb5-01"
/dev/synoboot2: SEC_TYPE="msdos" UUID="317D-E98D" TYPE="vfat" PARTUUID="f0c6ebb5-02"
/dev/md3: PTUUID="1828c708-ca70-4672-9095-a1ee53065320" PTTYPE="gpt"


<<Yeah bingo. So that's the real root file system. OK so we'd have had
to manually assemble that first partition, to spin up /dev/md0, then
mount it, in order to get access to /etc/lvm - from the test PC. But
you've put it in the NAS and you have /etc/lvm so it's fine.>>
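(Jotting this down in case it's needed later: from the test PC, that manual route would presumably have looked something like the lines below. The member partitions are guessed from the blkid output above, so this is a sketch rather than what was actually run.)

$ sudo mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdd1
$ sudo mount /dev/md0 /mnt
$ ls /mnt/etc/lvm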

Two different backups have been made: one zip, one tar.

<< How did you backup /dev/md3 to that 10T drive by the way? What was the command? >>

Exact command was:
# dd if=/dev/md3 bs=100M conv=sync,noerror | pv -s 9T | dd of=/dev/sdd1
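(If it ever matters, a rough way to spot-check that the image matches the source would be something like the line below; the device names only make sense when both the array and the backup drive are visible on the same machine, so this is purely illustrative.)

$ sudo cmp -n $((64*1024*1024)) /dev/md3 /dev/sdd1 && echo "first 64 MiB match"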

<<OK so I'm gonna guess that /dev/root is a label symlink to /dev/md0
you could do an 'ls -l /dev/' and look through that whole list for
root and see if it points to /dev/md0. >>

That's the first thing I did (look for /dev/root) and it wasn't there.
I think it's unimportant right now because we clearly have access to the rootfs on the NAS. That somewhat reinforces the belief that /dev/md3 is the "Volume1" which holds all of my data. :S
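(The direct check amounts to no more than the line below; on this box it just reports that the file doesn't exist.)

$ ls -l /dev/root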

<< ($searchdev) might be /dev/md3 on the NAS; or maybe /dev/sdX if it's
the 10T backup on your test PC, might be slightly faster on the 10T
because no RAID parity reconstruction is needed. >>

I'll kick off the search on the test PC with the 10TB backup mounted.
Additionally, I've powered off the NAS with the corrupted /dev/md3 for the night. There's no use keeping the drives spinning, or risking additional damage from a power outage or the like, while we're in this bad state.


<<John this is a safe command, read only, but it might have to go
through 8TB to find what we're after. And you'll have to save the
entire output from it (copy paste to a text file is fine). Those
offsets are where we'll have to do yet another search+extract of the
1MiB we want, and sanity check it. But if my command is wrong, it's
8TB searched for nothing so maybe wait and see if anyone chimes in.>>

I'll run it under nohup and redirect the output to a text file while I'm sleeping... just in case.
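(Roughly like this, with your search command from the earlier mail dropped in; the output file name is just a placeholder.)

$ nohup sudo <search command from previous mail> > ~/search-offsets.txt 2>&1 &
$ tail -f ~/search-offsets.txt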

<< OK so that's as expected. Data begins 1MiB into /dev/sda6. Command to
read a MB of that

$ sudo dd if=/dev/sda6 skip=2048 count=2048 of=/tmp/sda6missing1M.bin>>
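(Sanity-checking the numbers: dd defaults to a 512-byte block size, so skip=2048 starts 2048 × 512 B = 1 MiB into the partition, matching the 2048-sector data offset mdadm reported above, and count=2048 reads exactly 1 MiB.)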


The updated file is on gdrive:
https://drive.google.com/open?id=1A4e2UnzCiN0JUcJZdHe3QwXZa55-kMpd
I didn't think to run it through hexdump like you suggested; I'll do that tomorrow.
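(Probably something along these lines; the exact invocation you suggested is back in the earlier mail, so this is just from memory.)

$ hexdump -C /tmp/sda6missing1M.bin | less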





