Re: RAID-1 from SAS to SSD

On 30/07/19 12:07, Gandalf Corvotempesta wrote:
> # Sync disk partitions
> sfdisk --dump /dev/sda | sfdisk /dev/sdc
> sfdisk --dump /dev/sdb | sfdisk /dev/sdd
> 
> # Rebuild array
> mdadm /dev/md0 --add /dev/sdc1
> mdadm /dev/md0 --replace /dev/sda1 --with /dev/sdc1
> mdadm /dev/md0 --add /dev/sdd1
> mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdd1
> 
> This should replace, with no loss of redundancy, sda with sdc and sdb with sdd.
> Then I have to re-install the bootloader on the new disks and reboot
> to run from the SSDs.
> 
> Any thoughts? What about LVM? By syncing the disk partitions and
> the underlying array, LVM should be up & running on the next reboot
> automatically, even if moved from SAS to SSD, right?

Something has come up on the list recently, called dm-integrity. It's
worth thinking about.

Once you've created your sdc1 and sdd1, consider creating a dm-integrity
device on them, and then adding that dm-integrity device into your mirror.

It's another layer in the stack, but it checksums the data and
verifies it on every read. So should something *corrupt* one of your
mirrors, the dm-integrity layer will fail that read with an error,
and the raid will rewrite the bad mirror from the good copy. The
alternative is a raid that returns correct and corrupt data at
random until you run an integrity check, and even then it's a coin
toss whether the check overwrites the corrupt data with the correct
data, or vice versa.

Please note that dm-integrity is NEW, so use it at your own risk ...
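To make that concrete, here's a rough sketch using the integritysetup
tool (shipped with recent cryptsetup packages). Device names are just
examples from the procedure above; adjust to taste, and double-check
the man page for your version:

```shell
# Format the new partition with a dm-integrity superblock.
# WARNING: this wipes the partition -- only run it on the empty new SSD!
integritysetup format /dev/sdc1

# Open it; this creates /dev/mapper/int-sdc1
integritysetup open /dev/sdc1 int-sdc1

# Add the integrity device to the mirror instead of the raw
# partition, then replace the old disk as in your procedure
mdadm /dev/md0 --add /dev/mapper/int-sdc1
mdadm /dev/md0 --replace /dev/sda1 --with /dev/mapper/int-sdc1
```

One caveat: the integrity devices have to be opened *before* mdadm
assembles the array at boot, so you'll need to sort out the initramfs
ordering on your distro before relying on this.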

Cheers,
Wol


