Starting a RAID1 array after the filesystem has been used through LVM?

Hello,

one of my servers uses two iSCSI volumes in a RAID1 array with LVM on top. Apparently the RAID1 array wasn't properly started on the last reboot, but LVM still detected both partitions on the physical disks as PVs and simply used one of them (seen as /dev/sdb1).

Unfortunately, this went unnoticed and the system has been running like that for some time.
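
Roughly, this is how the state shows up (a sketch; the array name /dev/md0 and VG name are just placeholders for my setup):

    cat /proc/mdstat                  # the RAID1 array is missing or listed as inactive
    pvs -o pv_name,vg_name,pv_size    # the PV is reported on /dev/sdb1 directly,
                                      # not on the md device as expected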

The size of the volume is 7TB (with 5TB used), so I'd like to do this with as little downtime as possible.

The question is: is it safe to simply stop LVM, start the array in degraded mode with the 'current' disk (/dev/sdb1), start LVM, and then re-add the 'non-current' one (/dev/sda1)?
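
In other words, something along these lines (a rough sketch; the array name /dev/md0 and the VG name 'data' are made up, adjust to the actual setup):

    vgchange -an data                          # deactivate the VG so /dev/sdb1 is no longer in use
    mdadm --assemble --run /dev/md0 /dev/sdb1  # start the array degraded, with the current disk only
    pvscan                                     # let LVM rediscover the PV, now via /dev/md0
    vgchange -ay data                          # reactivate the VG on top of the array
    mdadm /dev/md0 --add /dev/sda1             # re-add the stale disk; I expect a full resync

My understanding is that the resync would also overwrite the stale PV copy on /dev/sda1, but I'd like confirmation before trying this on 7TB of data.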

Also - how can I prevent LVM from using these two partitions directly? I'd rather see the server startup fail than silently do the wrong thing... I already have the 'md_component_detection = 1' option in my 'lvm.conf'.
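
One idea I had is a whitelist filter in the devices section of lvm.conf, so that only the md device is ever considered as a PV (a sketch; /dev/md0 and the exact regex are assumptions for my setup):

    filter = [ "a|^/dev/md0$|", "r|.*|" ]    # accept only the md device, reject everything else

If lvm.conf is copied into the initramfs, I suppose that would have to be regenerated too. Is this the recommended approach, or is there something better?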



   Thanks, Danilo


