Re: Linux Raid + BTRFS: rookie mistake ... dd bs=1M

On Wed, Mar 6, 2019 at 3:48 PM <zittware@xxxxxxxxxxxxxxxxxxxxxxx> wrote:

> I ended up zeroing the first 1MB of my live /dev/md3 array.
> :homer doh:
> # dd if=/dev/md3 of=<some useful filename> bs=1M count=1

OK, I had to go to the reddit thread to parse this: that's your
backup command. You subsequently did:

# dd if=/dev/zero of=/dev/md3 bs=1M count=1

But apparently the backup copy you made was stored on the now damaged
md volume, so we don't have it? Do I understand that correctly?

By the way, that command won't damage the mdadm metadata on any of
the member drives; instead it damages the logical volume manager
metadata on that md logical device, the file system on that md
logical device, or both. So you have to figure out what was in that
1MB you zeroed. And you think you've got a 1MB backup file of it?
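
If that 1MB backup file does turn up, identifying what's in it is
easy with read-only tools (backup.bin below is just a placeholder
name for your file):

# file backup.bin
# hexdump -C backup.bin | less

An LVM PV label shows up as the string LABELONE near offset 0x200; a
wiped or empty region is all zeros.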

I'd say your best chance of success is to search all the drives for
the "magic" that belongs in that 1MB, if you can find out what
should be there. So you need to find out (maybe someone on this list
knows, but I have no idea) how Synology builds their arrays and what
would likely be in the first 1MB of /dev/md3. Is there a /dev/md1,
/dev/md2, or /dev/md4?
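
As a sketch of what that search could look like, assuming the usual
magic strings (LABELONE for an LVM PV label, _BHRfS_M for a Btrfs
superblock) and with sdX standing in for each member device; scanning
whole drives this way is very slow, so start with the first few MB
of each member and widen from there:

# dd if=/dev/sdX bs=1M count=8 2>/dev/null | hexdump -C | grep -e LABELONE -e _BHRfS_M

Both magics sit at 16-byte-aligned offsets, so each lands intact on
a single hexdump line.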

And what do you get for

# grep -r md3 /etc/lvm

If you get a bunch of hits in archive and backup, there's a good
chance there was LVM metadata in that zeroed-out 1MB.
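
For reference, the standard LVM recovery path in that case looks
roughly like the following; the VG name, archive file name, and PV
UUID are placeholders, and don't run any of it until the situation
is fully understood, because pvcreate rewrites the PV label:

# pvcreate --uuid <PV-UUID> --restorefile /etc/lvm/archive/<vg>_<NNNNN>.vg /dev/md3
# vgcfgrestore -f /etc/lvm/archive/<vg>_<NNNNN>.vg <vg>
# vgchange -ay <vg>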

What do you get for

# cat /proc/mdstat
# mdadm -D /dev/md3

Those are read-only commands; they don't change anything. If you're
going to have a chance of saving the array, you need to be extremely
deliberate about any changes you make from this point on.
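
While you're at it, a couple more read-only commands worth
capturing, with sdX standing in for each md3 member (-E reads the
per-device mdadm superblock, which your zeroing didn't touch):

# lsblk -f
# mdadm -E /dev/sdX

The Data Offset and Chunk Size lines in the -E output will matter
later, for working out which member held which part of that first
1MB.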

> Additionally; while I did back up some
> critical data... it would be quite painful to restore that data... and
> there would be data loss.

The mere fact that it's painful to reconstruct this array tells me
you don't have an adequate backup strategy. User error is a really
common cause of data loss. Just saying.

> My questions are still pretty basic. I'm looking for basic information
> as to how to start repairing this mistake. Ideally I'd like to somehow
> restore enough functionality to pull off the "backup" file then use
> that backup file to restore ?complete? access to the array.
>
> Is this even possible?

Maybe. You haven't told us where the backup file is. You've only told
us you didn't copy it to a USB drive. So I'm assuming it's somewhere
on /dev/md3. So we don't know yet what we're looking for.

> Side note; I do have a copy of ONE of the member drives from the
> array. At the time the raid was zeroed... it was physically
> disconnected from the system in an antistatic bag. If so, could
> that be used to "rebuild" the first 1M?

Maybe. It depends on which device it was, what kind of RAID is being
used, the chunk size, and how old the copy is. That missing 1MB
might contain static LVM metadata, or it might contain a file system
superblock; a stale superblock is tedious to reconstruct into
something current and valid.
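
If you do poke at that offline drive, keep it read-only. A sketch,
with sdY as a placeholder for it: check the metadata first, then
dump the start of its data area. Whether that area contains the
array's first 1MB depends on the RAID level and chunk size; with
RAID5 and 64K chunks, for example, that 1MB was striped across
several members, so no single drive has all of it.

# mdadm -E /dev/sdY
# dd if=/dev/sdY skip=<data-offset-sectors> bs=512 count=2048 2>/dev/null | hexdump -C | less

where <data-offset-sectors> is the Data Offset reported by mdadm -E
(count=2048 at bs=512 is exactly 1MB).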

> Is there a backup copy of the mbr/partitions somewhere I could use to restore?

Not applicable. Your zeroing wiped a middle layer of the storage
stack, whereas the partition map sits at the very start of the
stack.

>
> Finally; I posted a bunch of outputs on reddit when I was getting
> help... but the experts disappeared.
> https://www.reddit.com/r/DataHoarder/comments/aws9iv/btrfs_shr_rookie_mistake_dd_bs1m/ehp5o2z

Well, you have to understand that it only seems like a simple
mistake. This is why professional data recovery can cost $20,000 for
a "simple" RAID array. I suggest you get a quote from DriveSavers so
that you've got some idea of the specialized knowledge involved, and
of how tedious it can be to do what you don't really understand. And
then, if someone on this list or
https://www.redhat.com/mailman/listinfo/linux-lvm or
http://vger.kernel.org/vger-lists.html#linux-btrfs manages to help
you figure it out, you can make a donation to the Linux Foundation
or something.

Anyway, I'd expect that figuring this out will take about a week of
back and forth. We could get lucky and a search for this 1MB backup
succeeds in a day, but we need to know how big /dev/md3 is. From the
reddit thread it sounds like it might be Btrfs; I'm not sure why
Btrfs would be on LVM, but whatever. It might be necessary to search
for both LVM and Btrfs magic that's out of place (not where it's
supposed to be), in which case it's either stale metadata or maybe
it's your backup file.
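
One hopeful note on the Btrfs side: the primary superblock lives at
64KiB, so it was inside the zeroed region, but Btrfs keeps backup
copies at 64MiB and 256GiB. Assuming Btrfs sits directly on /dev/md3
(adjust the device if it turns out to be on an LV), these read the
backup copy and report what's wrong; super-recover asks before it
writes anything:

# btrfs inspect-internal dump-super -s 1 /dev/md3
# btrfs rescue super-recover -v /dev/md3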

Do you know if any of this is encrypted? Do you know if compression is
being used?
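
There are read-only ways to check if you're not sure; any active
device-mapper crypt target would show up in the table listing, and a
mounted Btrfs would show compress in its options (assuming anything
is still mountable at all):

# dmsetup table
# mount | grep -e btrfs -e crypt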

I'm willing to bet Synology support or engineers have a pretty good
idea of what should be in the first 1MB of /dev/md3, so it's worth
asking them; then you'd have some idea of what you're looking for.

--
Chris Murphy


