Re: Need help Recover raid5 array

I am at a loss.  I tried setting up an overlay with the 'overlay
manipulation functions' as a script.  First time touching that, but I
think it is working correctly.  I then ran wipefs --all --types
pmbr,gpt,dos /dev/sd{a,b,c,e}.  I wanted to tack on a file system
label of 'linux_raid_member' but don't know how.  Then I did:

sudo mdadm --create /dev/md2 --assume-clean --level=5 --chunk=512K \
    --metadata=1.2 --data-offset=257024s --raid-devices=5 \
    /dev/mapper/sda /dev/mapper/sdb /dev/mapper/sdc /dev/mapper/sdd \
    /dev/mapper/sde
mdadm: /dev/mapper/sdd appears to be part of a raid array:
       level=raid5 devices=5 ctime=Fri Nov 16 13:20:25 2018
mdadm: partition table exists on /dev/mapper/sdd but will be lost or
       meaningless after creating array
Continue creating array? y
mdadm: array /dev/md2 started.
thecompguru@bushserver:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 dm-4[4] dm-3[3] dm-2[2] dm-1[1] dm-0[0]
      39065233408 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/73 pages [0KB], 65536KB chunk

unused devices: <none>
thecompguru@bushserver:~$ sudo mount /dev/md2 /media/raid
mount: /media/raid: wrong fs type, bad option, bad superblock on /dev/md2, missing codepage or helper program, or other error.
thecompguru@bushserver:~$ sudo mdadm --stop /dev/md2
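
One sanity check on the mdstat output above: for raid5, usable blocks
are (raid-devices - 1) times the per-member data size, so the
per-member figure derived from 39065233408 should match the
'Used Dev Size' from the --examine output you saved.  A quick check of
that arithmetic (plain shell, nothing touches the drives):

```shell
#!/bin/bash
# Check that the assembled size is consistent with a 5-member raid5:
# usable blocks = (members - 1) * per-member data blocks (1 KiB units).
total=39065233408      # blocks, from /proc/mdstat above
members=5
per_member=$((total / (members - 1)))
echo "$per_member KiB per member"   # -> 9766308352 KiB per member
```

If that figure disagrees with the saved --examine output, the
data-offset or chunk size is wrong before the drive order even matters.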

Is this normal, and does it look as expected?  Am I doing this right?
Do I need to do this 120 times, changing the drive order each time,
until the array shows up as working?  I need some hand-holding or a
more step-by-step guide, because I am just not sure what to do.
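
For what it's worth, the 120-order search does not have to be typed by
hand.  A sketch that only prints the candidate mdadm --create command
for every ordering (the device names and options are taken from the
command above; nothing here is executed against the drives):

```shell
#!/bin/bash
# Enumerate all orderings of the five overlay devices and print one
# mdadm --create command per ordering.  This only prints; wrap each
# printed line in your own create/mount-test/stop loop under overlays.
perms() {
        local prefix="$1"; shift
        if (( $# == 0 )); then
                printf 'mdadm --create /dev/md2 --assume-clean --level=5 --chunk=512K --metadata=1.2 --data-offset=257024s --raid-devices=5%s\n' "$prefix"
                return
        fi
        local d o
        local -a rest
        for d in "$@"; do
                rest=()
                for o in "$@"; do [[ $o == "$d" ]] || rest+=("$o"); done
                perms "$prefix /dev/mapper/$d" "${rest[@]}"
        done
}
perms '' sda sdb sdc sdd sde | wc -l   # -> 120
```

Drop the `| wc -l` to see the actual command lines; the first one is
the sda..sde order you already tried.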

Is it possible to do some kind of dd copy of parts of the good drive
so that mdadm can find the superblock (or whatever it needs) on the
other drives?

Check me on the overlay as well.  I just copied the two functions into
an executable .sh script, added a line at the bottom, and ran it with
sudo.
****
devices="/dev/sda /dev/sdb /dev/sdc"

overlay_create()
{
        free=$(($(stat -c '%a*%S/1024/1024' -f .)))
        echo free ${free}M
        overlays=""
        overlay_remove
        for d in $devices; do
                b=$(basename $d)
                size_bkl=$(blockdev --getsz $d) # in 512-byte blocks/sectors
                # reserve 1M space for snapshot header
                # ext3 max file length is 2TB
                truncate -s$((((size_bkl+1)/2)+1024))K $b.ovr || { echo "Do you use ext4?"; return 1; }
                loop=$(losetup -f --show -- $b.ovr)
                # https://www.kernel.org/doc/Documentation/device-mapper/snapshot.txt
                dmsetup create $b --table "0 $size_bkl snapshot $d $loop P 8"
                echo $d $((size_bkl/2048))M $loop /dev/mapper/$b
                overlays="$overlays /dev/mapper/$b"
        done
        overlays=${overlays# }
}

overlay_remove()
{
        for d in $devices; do
                b=$(basename $d)
                [ -e /dev/mapper/$b ] && dmsetup remove $b && echo /dev/mapper/$b
                if [ -e $b.ovr ]; then
                        echo $b.ovr
                        l=$(losetup -j $b.ovr | cut -d : -f1)
                        echo $l
                        [ -n "$l" ] && losetup -d "$l"
                        rm -f $b.ovr &> /dev/null
                fi
        done
}
overlay_create
****
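
On the sizing in overlay_create: the truncate line makes the (sparse)
overlay file as large as the whole device in KiB (sectors/2), plus
1024 KiB of headroom for the dm-snapshot header.  A quick check of
that formula with a made-up sector count (the number below is
illustrative, not from your drives):

```shell
#!/bin/bash
# Reproduce the overlay-file size computation from overlay_create:
# KiB = ceil(sectors/2) + 1024 for the dm-snapshot header.
size_bkl=1000000                    # 512-byte sectors (hypothetical)
ovr_kib=$((((size_bkl+1)/2)+1024))  # same formula as in the script
echo "${ovr_kib}K"                  # -> 501024K
```

Because the file is sparse, it only consumes disk space as writes land
on the overlay; the "Do you use ext4?" message fires when truncate
fails, e.g. because the filesystem caps file size below the size of a
large member drive.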

My only way to proceed right now would be to run overlay_create, and I
assume that starts me fresh again after each set of drive changes?  I
then try creating the array again with a different drive order?  Not
really very feasible.  Can I determine the slot position of the intact
drive in any way?  Then that's 24 possible arrangements instead of
120.
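
On pinning down the intact drive's slot: if one member still has its
original superblock, `mdadm --examine` on it reports a line like
"Device Role : Active device N", and N is that drive's position in the
--create order.  A sketch of extracting the field; the sample line
below is fabricated, on a real system you would pipe in
`sudo mdadm --examine /dev/sdX` instead:

```shell
#!/bin/bash
# Extract the slot number from mdadm --examine output.  The sample
# string stands in for real output; replace it with:
#   sudo mdadm --examine /dev/sdX
sample='   Device Role : Active device 3'
slot=$(printf '%s\n' "$sample" | sed -n 's/.*Active device \([0-9]\+\).*/\1/p')
echo "slot $slot"   # -> slot 3
```

With one slot fixed, only the remaining four drives need permuting:
4! = 24 orders instead of 5! = 120, matching your estimate.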

Thanks for any help.

On Sun, Dec 19, 2021 at 6:58 AM Andreas Klauer
<Andreas.Klauer@xxxxxxxxxxxxxx> wrote:
>
> On Sat, Dec 18, 2021 at 11:31:39PM -0500, Tony Bush wrote:
> > I forgot that this new-to-this-system SSD had Windows 10 OS on
> > it and I believe it tried to boot while I was working on hooking up my
> > monitor.  So I think that it saw my raid drives and tried to fdisk
> > them.  I did mdadm directly to drive and not to a partition(big
> > mistake I know now).  So I think the drives were seen as corrupted and
> > fdisk corrected the formatting.
>
> Windows is known to do this but it can just as well happen within Linux.
> Hopefully no filesystem formatting took place...
>
> > To fix, I have been leaning toward making the drives read-only and
> > using an overlay file. Like here:
> > https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file
>
> This method is so useful there should be a standard command in Linux
> to create and manage overlays; but there is none so you have to make do
> with the 'overlay manipulation functions' as shown in the wiki.
>
> > But I don't understand all the commands well enough to work this for my
> > situation.  Seems like since I don't know the original drive
> > arrangement that may be adding an additional level of complexity.  If
> > I can figure out the read only and overlay, I still don't know exactly
> > the right way to proceed on the mdadm front.  Please anyone who has a
> > handle on a situation like this, let me know what I should do.  Thanks
>
> I summarized `mdadm --create` for data recovery here:
>
>   https://unix.stackexchange.com/a/131927/30851
>
> In addition you should remove the bogus GPT and MBR partition headers.
> You can use 'wipefs' for this task. (Test it with overlays first...)
>
>   wipefs --all --types pmbr,gpt,dos /dev/...
>
> You are lucky to have all the relevant `mdadm --examine` output,
> so you already know the correct data offset and only need to guess
> the correct order of drives.
>
> Regards
> Andreas Klauer


