Re: Need help Recover raid5 array

On Tue, Dec 28, 2021 at 06:56:35PM -0500, Tony Bush wrote:
> I then mounted the drive as my user, tested multiple large files and
> when I was done crying I unmounted.

:-)

> My next task is creating a new raid that is inside of partitions.

Now that you know how to create overlays and have verified the data 
is still there, if you want partitions you can do the whole 
thing one more time.
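
For reference, one overlay per member device looks roughly like this 
(a minimal sketch using a dm snapshot backed by a sparse file; the 
device name, COW size, and paths are placeholders, the linux-raid 
wiki has the full script):

# truncate -s 4G /tmp/overlay-sdx.img
# loop=$(losetup -f --show /tmp/overlay-sdx.img)
# size=$(blockdev --getsz /dev/sdX)
# dmsetup create sdx-overlay --table "0 $size snapshot /dev/sdX $loop P 8"

Writes then land in the sparse file while /dev/sdX itself stays untouched.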

Your data offset is 257024 sectors, so you can take 2048 sectors 
off that to create a partition at a 1 MiB offset.
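
In numbers: 257024 - 2048 = 254976 sectors, i.e. 127488 KiB of data 
offset inside the new partition keeps the data at the same absolute 
position on the disk. The re-create might then look like this (a 
sketch only, on overlays: device count, names, order, and chunk size 
here are placeholders that have to match your original array exactly):

# mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=4 \
        --chunk=512 --data-offset=127488K /dev/mapper/overlay[0-3]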

Since GPT keeps a backup partition table at the end of the drive 
(header plus table occupy the last 33 sectors), you might also have 
to shrink the filesystem on the RAID just a little to make room for it.
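
For example with sgdisk (the disk name is a placeholder), an end 
value of 0 means "last usable sector", which already leaves the 
last 33 sectors free for the backup table and header:

# sgdisk --new=1:2048:0 /dev/sdX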

Test it all on overlays until you make it work...

> Need to do this with 2 new drives, transfer 10TB, then shrink and
> remove a drive to add to new raid, transfer and repeat.

> Let me know if this is a bad idea please.

Growing is way more common than the other way around.
Things can go mysteriously wrong when shrinking stuff. 

As an alternative to the overlay method mentioned above, 
if you do not mind re-syncing, you could also mdadm --replace 
your full-disk members with partitioned members.

It will complain that the device is "not large enough to join array", 
so you still have to shrink the filesystem just a little and shrink 
the RAID itself accordingly until the device is no longer too small.

The main issue is that mdadm's --array-size and --size options can 
be difficult to deal with. So things can go wrong here, too.

Test it on a separate array (loop devices) first...
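
Something like this gives you a disposable test array (sizes and 
names are only illustrative, mirroring the ~8G components below):

# truncate -s 8G /tmp/md-test-{0..5}.img
# for f in /tmp/md-test-*.img; do losetup -f "$f"; done
# mdadm --create /dev/md100 --level=5 --raid-devices=4 /dev/loop[0-3]
# mkfs.ext4 /dev/md100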

# mdadm --grow /dev/md100 --array-size=max
(no output, check dmesg or /proc/mdstat for size)
      25132032 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
# blockdev --getsize64 /dev/md100
25735200768

### shrink filesystem on /dev/md100 to 25100000K
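### for ext4 that would be something like this (a sketch: the
### filesystem must be unmounted and checked before shrinking)
# umount /dev/md100
# e2fsck -f /dev/md100
# resize2fs /dev/md100 25100000K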

# mdadm --grow /dev/md100 --array-size=25100000K

### double check that filesystem is still OK here
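### e.g. a read-only check
# e2fsck -fn /dev/md100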

# mdadm /dev/md100 --add /dev/loop5
mdadm: /dev/loop5 not large enough to join array
# mdadm --grow /dev/md100 --size=max
mdadm: component size of /dev/md100 unchanged at 8377344K
# mdadm --grow /dev/md100 --size=8377343K
mdadm: component size of /dev/md100 has been set to 8376832K
# mdadm /dev/md100 --add /dev/loop5
mdadm: /dev/loop5 not large enough to join array
# mdadm --grow /dev/md100 --size=8376831K
mdadm: component size of /dev/md100 has been set to 8376320K
# mdadm /dev/md100 --add /dev/loop5
mdadm: added /dev/loop5
# mdadm /dev/md100 --replace /dev/loop3

# cat /proc/mdstat
md100 : active raid5 loop5[6](R) loop4[5] loop3[4] loop2[2] loop1[1] loop0[0](F)
      25100000 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [=========>...........]  recovery = 45.4% (3806592/8376320) finish=0.3min speed=200346K/sec

# mdadm --grow /dev/md100 --size=max
# mdadm --grow /dev/md100 --array-size=max
# resize2fs /dev/md100

Something like this might work if you had to do it without overlays.

Not sure if there is a simpler way right now.

Regards
Andreas Klauer


