Re: Degraded RAID1

On 16.10.19 at 23:01, Curtis Vaughan wrote:
> Switched out the bad hard drive and added a brand new one.
> 
> Now I thought I should just run:
> 
> sudo mdadm --add /dev/md1 /dev/sda2
> sudo mdadm --add /dev/md1 /dev/sda1
> 
> and the RAID would be back up and running (RAID1, btw). But I think it
> won't add sda1 or sda2 because they don't exist. So it seems I need to
> partition the drive first? But how do I partition it EXACTLY like the
> other? Or is there another way?

If the disks are *not* GPT it's easy; the script below is from a 4-disk
RAID10, and the early exit is there by intention, so it can't be run by accident.
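
If the disks *are* GPT, copying only the first 512 bytes is not enough
(GPT keeps a backup table at the end of the disk); sgdisk can replicate
the partition table instead. A rough sketch, assuming /dev/sda is the
good disk and /dev/sdd the replacement, as in the script below:

# replicate the partition table from /dev/sda onto /dev/sdd
sgdisk -R=/dev/sdd /dev/sda

# give the copy new random disk and partition GUIDs so the two disks don't collide
sgdisk -G /dev/sdd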

[root@srv-rhsoft:~]$ df
Filesystem     Type  Size    Used Avail Use% Mounted on
/dev/md1       ext4   29G    7,3G   22G   26% /
/dev/md0       ext4  485M     47M  435M   10% /boot
/dev/md2       ext4  3,6T    1,7T  2,0T   46% /mnt/data

[root@srv-rhsoft:~]$ cat /scripts/raid-recovery.sh
#!/usr/bin/bash

GOOD_DISK="/dev/sda"
BAD_DISK="/dev/sdd"

# --------------------------------------------------------------------------

echo "NOT NOW"
exit

# --------------------------------------------------------------------------

# clone the MBR (first 512 bytes: boot code plus the primary partition table)
dd if=$GOOD_DISK of=$BAD_DISK bs=512 count=1

# force OS to read partition tables
partprobe $BAD_DISK

# start RAID recovery
mdadm /dev/md0 --add ${BAD_DISK}1
mdadm /dev/md1 --add ${BAD_DISK}2
mdadm /dev/md2 --add ${BAD_DISK}3

# print RAID status on screen
sleep 5
cat /proc/mdstat

# install bootloader on replacement disk
grub2-install "$BAD_DISK"
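
For a two-disk RAID1 like yours the same idea boils down to just a few
commands. A rough sketch, assuming the surviving disk is /dev/sdb, the
replacement is /dev/sda, and sda1/sda2 belong to two different arrays;
the device and array names here are only an example, so check
"cat /proc/mdstat" and "lsblk" first:

# clone the MBR, including the partition table, from the good disk
dd if=/dev/sdb of=/dev/sda bs=512 count=1

# make the kernel re-read the new partition table
partprobe /dev/sda

# add the new partitions back into the arrays and watch the rebuild
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
cat /proc/mdstat

# reinstall the bootloader on the new disk
# (grub-install on Debian/Ubuntu, grub2-install on Fedora/RHEL)
grub-install /dev/sda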


