Hi all,

Forgive me if this has been dealt with before; I couldn't find a search tool for the archives.

I've just had a play with LVM mirroring, and it appears to have a rather serious usability issue.

I added a new drive to my system, created a PV on it, then used lvconvert -m1 to mirror my various volumes. Sure enough, the new PV eventually contained a copy of my data, so as far as I knew I now had failure resilience at the LVM layer.

So I simulated a failure: I powered down the machine and removed the new drive. And then it all went wrong. The boot sequence got through GRUB, then failed - it could not find my root FS. Swapping to the other drive did exactly the same; neither disk from my "mirror" was bootable.

After a bit of poking at it, I found that LVM would not activate the VG with a PV missing. Even after activating it manually (and mounting it, and looking at it) from a recovery shell (dracut? I'm not sure), I still couldn't get the machine booted.

I fixed it eventually by booting a LiveUSB. The only thing I could get to work was to run vgreduce --removemissing --force against each of the disks. That is clearly not the sort of thing we ought to be doing for something as simple as a disk failure...

So it seems that with LVM mirrors, the machine ends up unbootable if either disk fails. That can't be right, can it?

Vic.

p.s. The reason I want to do this with LVM mirrors rather than MD RAID is that it is potentially a very useful way of grabbing a backup snapshot of a machine prior to some risky operation - it lets me "fork" a machine onto two sets of media. But if it leaves the result unbootable, that's something of a problem...

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
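For anyone wanting to reproduce this, here is roughly the sequence I went through, plus the recovery that eventually worked. The device name /dev/sdb and the VG/LV names vg0 and root are hypothetical - substitute your own. This is a sketch from memory, not a verified transcript.

```shell
# --- Setting up the mirror (new disk assumed to be /dev/sdb) ---
pvcreate /dev/sdb                   # create a PV on the new drive
vgextend vg0 /dev/sdb               # add it to the existing VG
lvconvert -m1 vg0/root /dev/sdb     # convert each LV to a two-way mirror

# --- After pulling one disk: normal activation refuses to start the VG ---
vgchange -ay vg0                    # fails, complaining about the missing PV
vgchange -ay --partial vg0          # may activate what it can despite the missing PV

# --- What eventually worked, from a LiveUSB (destructive: drops the mirror leg) ---
vgreduce --removemissing --force vg0
```

Note that the vgreduce step permanently removes the missing PV from the VG metadata, so the two disks can no longer be rejoined as a mirror afterwards - which rather defeats the point of mirroring for simple failure resilience.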