Re: RAID-1 from SAS to SSD

On 30.07.19 at 13:07, Gandalf Corvotempesta wrote:
> As I need more space and have some free slots on the server, can I
> replace the disks one by one (by adding a new disk and removing the
> old one from the array when done) with some SSDs?
> 
> Something like this:
> 
> # Sync disk partitions
> sfdisk --dump /dev/sda | sfdisk /dev/sdc
> sfdisk --dump /dev/sdb | sfdisk /dev/sdd
> 
> # Rebuild array
> mdadm /dev/md0 --add /dev/sdc1
> mdadm /dev/md0 --replace /dev/sda1 --with /dev/sdc1
> mdadm /dev/md0 --add /dev/sdd1
> mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdd1
> 
> This should replace, with no loss of redundancy, sda with sdc and sdb
> with sdd. Then I have to re-install the bootloader on the new disks
> and reboot to run from the SSDs.
> 
> Any thoughts? What about LVM? By syncing the disk partitions and the
> underlying array, LVM should be up & running on next reboot
> automatically, even if moved from SAS to SSD, right?

mdraid doesn't care, and LVM can't care because it only sees block devices.
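
For the quoted plan, a rough sketch of the finishing steps once each
--replace has synced (device names are the ones from the example above):
when the copy completes, mdadm marks the old member as faulty, so it
can simply be removed.

watch cat /proc/mdstat               # wait until the copy reaches 100%
mdadm --detail /dev/md0              # old member should now show "faulty"
mdadm /dev/md0 --remove /dev/sda1    # drop the replaced disk
grub2-install /dev/sdc               # make the new SSD bootable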

For BIOS setups I have used the script below for years on several
machines to clone the MBR and install the bootloader on the replacement
disk. My home machine got 2 out of 4 disks replaced with SSDs two years
ago, and last summer the remaining two were replaced by Samsung Evo 860
2 TB drives.

[root@srv-rhsoft:/downloads]$ LANG=C df -hT
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md1       ext4   29G  7.4G   22G  26% /
/dev/md0       ext4  485M   44M  437M  10% /boot
/dev/md2       ext4  3.6T  1.7T  2.0T  46% /mnt/data

--------------------------

[root@srv-rhsoft:/downloads]$ cat /scripts/raid-recovery.sh
#!/usr/bin/bash

GOOD_DISK="/dev/sda"
BAD_DISK="/dev/sdd"

# --------------------------------------------------------------------------

echo "NOT NOW"
exit

# --------------------------------------------------------------------------

# clone MBR
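# (the first 512 bytes hold the boot code plus the MBR partition table;
#  this only works for BIOS/MBR disks - a GPT layout would need
#  something like "sgdisk -R" instead)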
dd if=$GOOD_DISK of=$BAD_DISK bs=512 count=1

# force OS to read partition tables
partprobe $BAD_DISK

# start RAID recovery
mdadm /dev/md0 --add ${BAD_DISK}1
mdadm /dev/md1 --add ${BAD_DISK}2
mdadm /dev/md2 --add ${BAD_DISK}3

# print RAID status on screen
sleep 5
cat /proc/mdstat

# install bootloader on replacement disk
grub2-install "$BAD_DISK"
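
A rough sketch of the workflow around the script (not part of the
script itself; device names are just the example values from above):
fail and remove the dying disk from all arrays, swap the hardware,
then adjust GOOD_DISK/BAD_DISK and disable the "NOT NOW" guard
before running it:

mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
mdadm /dev/md1 --fail /dev/sdd2 --remove /dev/sdd2
mdadm /dev/md2 --fail /dev/sdd3 --remove /dev/sdd3
# shut down, swap the drive, boot, then run /scripts/raid-recovery.sh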


