I recently had to replace a bad disk in a RAID1 array, and finding
proper docs was not a good experience.
There is apparently also a trap: without a bootloader installed on the
replacement disk, the system can become unbootable in case of array
degradation. Some of the guides don't mention this; I just happened to
stumble upon this information on serverfault.
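A quick way to check whether a disk actually has GRUB in its MBR
(assuming a BIOS/MBR setup; on UEFI the bootloader lives on the ESP
instead):

# look for the GRUB signature in the first sector of each disk
for d in /dev/sde /dev/sdd; do
    dd if=$d bs=512 count=1 2>/dev/null | strings | grep -q GRUB \
        && echo "$d: GRUB present" || echo "$d: no GRUB found"
done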
I came up with these steps; can someone verify this is the correct
way, with no hidden troubles lurking somewhere?
/dev/sde - remaining healthy disk
/dev/sdf - bad disk
/dev/sdd - new disk
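Since /dev/sdX names can shuffle between boots, it's worth confirming
which physical disk is which by serial number before doing anything:

# map device names to physical disks
ls -l /dev/disk/by-id/ | grep -E 'sd[def]$'
smartctl -i /dev/sdf | grep -i serial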
# remove bad disk
mdadm --fail /dev/md127 /dev/sdf1
mdadm --remove /dev/md127 /dev/sdf1
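After the removal, the array should report itself as degraded with only
one active member; a quick sanity check:

# confirm sdf1 is gone and the array is degraded
mdadm --detail /dev/md127
grep -A 2 md127 /proc/mdstat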
# copy partition map from good disk to new disk
sfdisk -d /dev/sde | sfdisk /dev/sdd
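(Recent sfdisk handles GPT as well; if the disks are GPT and you prefer
the gdisk tools, an equivalent would be replicating the table and then
randomizing the GUIDs so the copy doesn't clash with the original:)

# GPT alternative: copy sde's table onto sdd, then randomize GUIDs
sgdisk -R=/dev/sdd /dev/sde
sgdisk -G /dev/sdd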
# verify the new disk has the boot and raid flags and that /dev/sdd1
# is the same size as /dev/sde1
parted /dev/sdd print
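Another quick check is comparing the two partitions' sizes in bytes:

# both raid members should be exactly the same size
lsblk -b -o NAME,SIZE /dev/sde1 /dev/sdd1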
# remove raid metadata
mdadm --zero-superblock /dev/sdd1
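If the disk was used before, it may also carry stale filesystem or LVM
signatures; wipefs lists them non-destructively, and --examine confirms
the superblock is really gone:

# confirm no stale md metadata or other signatures remain
mdadm --examine /dev/sdd1
wipefs /dev/sdd1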
# install bootloader on the new disk (whole disk, not the partition;
# grub2-install to a partition needs --force and is generally discouraged)
grub2-install /dev/sdd
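Caveat: this step assumes a BIOS/MBR setup. On UEFI the bootloader
lives on the EFI system partition rather than in the MBR, so the
equivalent concern is having a populated ESP and a boot entry covering
the new disk; a non-destructive way to see what's currently registered:

# list existing UEFI boot entries (UEFI systems only)
efibootmgr -v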
# add to the mirror; a plain --add is enough here, since the array still
# has raid-devices=2 with one slot empty, so the rebuild starts on its own
mdadm --add /dev/md127 /dev/sdd1
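Right after the add, the new partition should show up as a spare being
rebuilt:

# sdd1 should appear as "spare rebuilding"
mdadm --detail /dev/md127 | grep -E 'State|sdd1'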
# watch it rebuild
watch -n 5 -d cat /proc/mdstat
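Once the rebuild finishes, an optional scrub verifies that the two
mirror halves actually match (md exposes this through sysfs):

# optional consistency check after the rebuild completes
echo check > /sys/block/md127/md/sync_action
cat /sys/block/md127/md/mismatch_cnt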