Hi to all. I have the following RAID1 array:

```
/dev/md0:
        Version : 1.2
  Creation Time : Tue Dec 27 21:35:37 2016
     Raid Level : raid1
     Array Size : 292836608 (279.27 GiB 299.86 GB)
  Used Dev Size : 292836608 (279.27 GiB 299.86 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Jul 30 12:58:34 2019
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : x:0  (local to host x)
           UUID : d8599926:69a5c35a:66a167d4:5a464a7b
         Events : 68628

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      active sync   /dev/sdb1
```

LVM sits on top of the array:

```
# pvs
  PV       VG  Fmt  Attr PSize   PFree
  /dev/md0 vg1 lvm2 a--  279.27g 120.27g

# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg1   1   7   0 wz--n- 279.27g 120.27g

# lvs
  LV       VG  Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  lv_boot  vg1 -wi-ao----  1.00g
  lv_log   vg1 -wi-ao---- 50.00g
  lv_mysql vg1 -wi-ao---- 20.00g
  lv_root  vg1 -wi-ao---- 20.00g
  lv_swap  vg1 -wi-ao----  8.00g
  lv_tmp   vg1 -wi-ao---- 10.00g
  lv_www   vg1 -wi-ao---- 50.00g
```

As I need more space and I have some free slots on the server, can I replace the disks one by one with SSDs (adding a new disk and removing the old one from the array when done)? Something like this:

```
# Copy the partition tables to the new disks
sfdisk --dump /dev/sda | sfdisk /dev/sdc
sfdisk --dump /dev/sdb | sfdisk /dev/sdd

# Rebuild the array onto the new disks
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md0 --replace /dev/sda1 --with /dev/sdc1
mdadm /dev/md0 --add /dev/sdd1
mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdd1
```

This should replace sda with sdc and sdb with sdd with no loss of redundancy. Then I would re-install the bootloader on the new disks and reboot to run from the SSDs.

Any thoughts? What about LVM? Since the partition tables and the underlying array are synced, LVM should be up & running automatically on the next reboot, even after moving from SAS to SSD, right?
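To keep redundancy throughout, I would wait for each `--replace` to finish before touching the next disk. A sketch of how I'd monitor that (only device names from the layout above):

```shell
# Watch the rebuild/replacement progress
cat /proc/mdstat

# Or block until the current recovery completes before the next step
mdadm --wait /dev/md0
```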
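For the bootloader step, a minimal sketch assuming GRUB 2 on a BIOS/MBR setup (adjust accordingly for UEFI):

```shell
# Install GRUB on both new disks so the server can boot from either SSD
grub-install /dev/sdc
grub-install /dev/sdd
update-grub   # Debian/Ubuntu; on RHEL-like systems: grub2-mkconfig -o /boot/grub2/grub.cfg
```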
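Since the whole point is more space: `sfdisk --dump` copies the old partition sizes, so if the SSDs are larger I'd create (or enlarge) `/dev/sdc1` and `/dev/sdd1` to fill them before adding, then grow the stack after both replacements. A sketch, assuming the new partitions already span the SSDs:

```shell
# Grow the RAID1 to the size of the (now larger) members
mdadm --grow /dev/md0 --size=max

# Make LVM see the new space on the PV
pvresize /dev/md0

# Then extend whichever LV needs it; the LV and amount here are just examples
lvextend -r -L +50G /dev/vg1/lv_www   # -r also resizes the filesystem
```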