Re: Recover data from accidentally created raid5 over raid1

Good Morning everyone.

Thank god I have very good news, since I managed to get access to (hopefully most of) the data.
I want to share the path I took with you for further reference.

First thing I tried was to use "foremost" directly on the raid array (/dev/md0) that had no partition table. That was only partially successful since it ran *very* slowly and produced mixed results: many broken files, no directory structure, no filenames. But some files came out correctly (content-wise), so I had hope.

Following your advice I then started off on this page: https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID (chapter "Making the harddisks read-only using an overlay file"). Given your assumption that the data should still be there on the raw disks, I created overlays as explained. One note here: in step 3 the page uses "blockdev --getsize ....", but in my version (2.36.1) that parameter is marked as "deprecated" and didn't work. I had to use "blockdev --getsz" instead.
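For the archives, the overlay procedure from that wiki page boils down to roughly the following. This is a sketch, not the wiki's exact commands: the device name /dev/sdX1, the overlay file name, and its size are placeholders, and it uses "blockdev --getsz" as noted above for util-linux >= 2.36.

```shell
# Create a copy-on-write overlay so all writes land in a sparse file
# and the real disk is never modified (placeholder names throughout).
DEV=/dev/sdX1
truncate -s 4T overlay-sdX1.img               # sparse overlay file
LOOP=$(losetup -f --show overlay-sdX1.img)    # attach it to a loop device
SIZE=$(blockdev --getsz "$DEV")               # device size in 512-byte sectors
# snapshot target: <origin> <COW device> <persistent?> <chunksize in sectors>
echo "0 $SIZE snapshot $DEV $LOOP P 8" | dmsetup create sdX1
# Recovery tools can now safely operate on /dev/mapper/sdX1.
```

Tearing it down afterwards is `dmsetup remove sdX1` followed by `losetup -d` on the loop device.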

Having the overlays in place, I fiddled around and ended up using "testdisk", letting it analyze the overlay device "/dev/mapper/sdX1". After selecting the "EFI GPT" partition type, it found a partition table I was able to use. From there on the process was pretty straightforward: testdisk showed the disk's old file structure and I was able to copy the "lost" files to a backup HDD. The process is still running, but spot checks were promising that most of the data can be recovered.
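In case someone retraces this later: testdisk is interactive, so there is not much to script, but the invocation against the overlay (rather than the real disk) looks like this. The mapper name follows from the dmsetup mapping; the key strokes describe the menu path I mean, your version's UI may differ slightly.

```shell
# Run testdisk against the copy-on-write overlay, never the raw disk.
testdisk /dev/mapper/sdX1
# In the UI: Proceed -> partition table type "EFI GPT" -> Analyse.
# Once the old partition shows up, "P" lists its files and "c" copies
# the selected files out to a destination directory on another disk.
```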

When the backup has finished, I will completely re-create the array and start with a clean setup :-)

Thanks again for your support, input and thoughts.

Best
Moritz

On 2023-04-12 12:15, Moritz Rosin wrote:
Hi Phil et al.,

first of all, thank y'all so much for your thoughts and answers.
I am going to add as much information as possible.

On 12.04.2023 at 02:26, Phil Turmel wrote:
Hi Moritz, et al,

On 4/11/23 20:18, Wol wrote:
On 11/04/2023 20:47, John Stoffel wrote:
"Moritz" == Moritz Rosin <moritz.rosin@xxxxxxxxxxxx> writes:

Hey there,
unfortunately I have to admit, that I learned my lesson the hard way
dealing with software raids.

I had a raid1 running reliably for months using two 4TB HDDs.
Since I ran short on free space I tried to convert the raid1 to a raid5
in-place (with the plan to add the 3rd HDD after converting).
That's where my incredibly stupid mistake kicked in.

I followed an internet tutorial that told me to do:
mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sdX1 /dev/sdY1

Ewww.

Please share the link to the tutorial so we can maybe shame that
person into fixing it.  Or removing it.
I followed this tutorial, but found similar ones suggesting the use of "mdadm --create" -> https://dev.to/csgeek/converting-raid-1-to-raid-5-on-linux-file-systems-k73
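For anyone who lands here from a search: the supported in-place conversion goes through "mdadm --grow" on the *existing* array, never through "--create" (which writes new metadata and starts a fresh sync). A sketch, with /dev/sdZ1 as a placeholder for the new third disk:

```shell
# Convert an existing 2-disk raid1 to raid5 in place, then expand it.
# Do NOT use "mdadm --create" on an array that holds data.
mdadm --grow /dev/md0 --level=5          # reshape raid1 -> 2-disk raid5
mdadm --manage /dev/md0 --add /dev/sdZ1  # add the new disk as a spare
mdadm --grow /dev/md0 --raid-devices=3   # reshape onto all three disks
# Afterwards, grow the filesystem (e.g. resize2fs for ext4).
```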


See below. There's no reason why it shouldn't work, PROVIDED nothing has happened to the mirror since you created it.

I learned that I re-created a raid5 array instead of converting the
raid1 :-(

Indeed.  It would have sync'd every other chunk in opposite directions to place "parity" in the right rotation, but otherwise equivalent to a mirror.

Yeah, I think you're out of luck here. What kind of filesystem did
you have on your setup?  Were you using MD -> LVM -> filesystem stack?
Or just a raw filesystem on top of the /dev/md?  device?

I dunno. A two-disk raid-5 is the same as a 2-disk mirror. That raid-5 MAY just start and run and you'll be okay. You can try mounting it read-only and see what happens ...
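A cautious way to try that without risking further writes is to assemble and mount strictly read-only. Member device names are placeholders:

```shell
# Assemble the array read-only, then mount the filesystem read-only,
# so nothing on the disks is modified while inspecting it.
mdadm --assemble --readonly /dev/md0 /dev/sdX1 /dev/sdY1
mount -o ro /dev/md0 /mnt
```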

The odds of matching offsets depends entirely on how old the original raid1 was.
The original raid1 was about 2 years old.
I was using ext4 directly on top of the array.


Is there any chance to un-do the conversion or restore the data?
Has the process of creation really overwritten data or is there
anything left on the disk itself that can be rescued?

If the conversion has overwritten the data, it will merely have overwritten one copy of the data with the other.

Concur.

If you have any information on your setup before you did this, then
you might be ok, but honestly, I think you're toast.

It might be a bit of a forensic job, but no I don't think so. Do you have that third 4TB HDD? If so, MAKE A BACKUP of one of the drives. That way, you'll have three copies to play with to try and recover the data.
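The backup Wol describes can be a straight device-to-device image. GNU ddrescue is a good fit because it keeps a map file and can resume after interruption; plain dd works too. Device names below are placeholders, and the destination is wiped:

```shell
# Image one raid member onto the spare 4TB disk before experimenting.
# /dev/sdX = source member, /dev/sdZ = spare disk (will be overwritten).
ddrescue -f /dev/sdX /dev/sdZ sdX.map
# Alternative with plain dd:
# dd if=/dev/sdX of=/dev/sdZ bs=64M conv=sync,noerror status=progress
```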

This.

As John says, please give us all the information you can. If you've just put a file system on top of the array, you should now have three copies of the filesystem to try and recover. I can't help any further here, but all you have to do is track down the start of said filesystem, work out where to tell linux to start a partition so it correctly contains the filesystem, and then mount said partition. Your data should all be there.

The trick will be to determine the offset.  Please share as much information as possible as to the layering of the original setup, preferably with the fstab contents if available.
Unfortunately I have no output (e.g. of fdisk -l) _before_ converting the array. What I found in the syslogs is pasted here: https://pastebin.com/iktUtYyt
Output of lsblk: https://pastebin.com/LNyUizGq
Output of fdisk -l (after conversion): https://pastebin.com/LH6ngUjc -- as you can see, no partition table exists for "md0"
Output of "mdadm --detail": https://pastebin.com/vYhjphKY

Actually, you might be better off not copying onto drive 3. If you can work out where your filesystem partition should start, create a partition on drive 3 and copy the filesystem contents into said partition.
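One way to do that detective work for an ext4 filesystem: its superblock sits 1024 bytes past the filesystem start, with the magic number 0xEF53 at bytes 56-57 of the superblock. So a candidate start offset can be probed directly; /dev/sdX1 and the offset are placeholders:

```shell
# Probe a candidate filesystem start for the ext2/3/4 superblock magic.
# If the filesystem starts at byte 0 of the device, the magic lives at
# absolute offset 1024 + 56 = 1080 (stored little-endian as "53 ef").
dd if=/dev/sdX1 bs=1 skip=1080 count=2 2>/dev/null | xxd
```

If the filesystem starts deeper into the device, add the suspected data offset to the skip value and repeat until "53ef" turns up.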

Or overlays with dmsetup.

I've cc'd a couple of people I hope can help, but basically, you need to find out where in the raid array your data has been put, and then work out how to access it. Your data SHOULD be recoverable, but you've got some detective work ahead of you.

Your odds are decent.  Again, share all the info you can.
Is there anything else I can provide?
What is a good point to start the detective work?



Cheers,
Wol

Phil
Thanks
Moritz


