Snapshots on clustered LVM

Currently we are using LVM as backing storage for our DRBD devices in HA set-ups. Our QEMU instances run on the nodes using the (local) DRBD devices for storage, which enables us to do live migrations between the DRBD primary and secondary nodes.

We want to support iSCSI targets in our HA environment. We are trying to see whether we can use (c)LVM for that by creating a volume group on top of our iSCSI block devices and using that volume group on all nodes to create logical volumes. This seems to work fine as long as we handle locking etc. properly and make sure we only activate the logical volumes on one node at a time. As long as a volume is active on only one node, snapshots also seem to work fine.
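For reference, the kind of activation involved would look roughly like this (a sketch only; the vg_iscsi/lv_vm01 names are placeholders, and this assumes a clustered VG managed by clvmd):

    # on the node that should run the instance: activate exclusively
    lvchange -aey vg_iscsi/lv_vm01

    # on all other nodes: make sure the volume is not active
    lvchange -an vg_iscsi/lv_vm01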

However, we run into problems when we want to perform a live migration of a running QEMU instance. To do a live migration we have to start a second, similar QEMU instance on the node we want to migrate to and then start a QEMU live migration. For that we have to make the logical volume active on the target node as well, otherwise we cannot start the QEMU instance there. During the live migration QEMU ensures that data is only written on one node (i.e. data is written on the source node during the migration; QEMU then pauses the instance briefly while copying the last data and continues it on the target node).
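To illustrate, such a migration could look roughly like this (a sketch only; host names, ports and device paths are placeholders, and this assumes QEMU is driven directly via its monitor rather than through a management layer):

    # target node: start a QEMU instance that waits for the incoming migration state
    qemu-system-x86_64 -m 2048 \
        -drive file=/dev/vg_iscsi/lv_vm01,format=raw,cache=none \
        -incoming tcp:0:4444

    # source node: in the monitor of the running instance
    (qemu) migrate -d tcp:target-node:4444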

This use case works fine with a clustered LVM set-up, except for snapshots: changes are not saved in the snapshot while the logical volume is active on both nodes (as expected, if the manual is correct: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html-single/Logical_Volume_Manager_Administration/#snapshot_volumes).
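The snapshots themselves are ordinary LVM snapshots created on the node where the volume is active, e.g. something like (size and names are just examples):

    lvcreate -s -L 10G -n lv_vm01_snap vg_iscsi/lv_vm01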

If we are correct, this means we can use LVM as a clustered "file system", but we cannot trust our snapshots to be 100% reliable once a volume group has been made active on more than one node, e.g. while doing a live migration of a QEMU instance between two nodes our snapshots become unreliable.

Are these conclusions correct? Is there a solution for this problem, or is this simply a known limitation of clustered LVM without a work-around?

-- 
Met vriendelijke groet / Kind regards,
Bram Klein Gunnewiek | Shock Media B.V.

Tel: +31 (0)546 - 714360
Fax: +31 (0)546 - 714361
Web: https://www.shockmedia.nl/
