Re: clvmd leaving kernel dlm uncontrolled lockspace

On 06.06.13 13:06, matthew patton wrote:
> --- On Thu, 6/6/13, Andreas Pflug <pgadmin@pse-consulting.de> wrote:
>
> > On a machine being a Xen host with 20+ running VMs, I'd clearly
> > prefer to clean up that orphaned memory space and go on.... I
>
> This is exactly why it is STRONGLY suggested you split your storage tier from your compute tier. The lowest-friction method would be a pair of nodes that hold the disks (or access a common disk set) and export them as NFS. The compute nodes can speed things up with CacheFS for their locally running VMs, assuming you shepherd the live-migration process.

The Xen hosts are iSCSI initiators, but their use of the SAN-located VG has to be coordinated, using clvmd. It's just what XCP/XenServer does, but with clvmd to ensure locking (apparently XCP/XenServer relies on friendly behaviour, using no locking).
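
For reference, the coordination boils down to this on each host (a minimal sketch; the VG name "san-vg" is illustrative, not from the actual setup):

    # /etc/lvm/lvm.conf on every Xen host: built-in clustered locking via clvmd
    locking_type = 3

    # flag the shared VG as clustered, so clvmd/DLM mediates its locking
    vgchange -c y san-vg

    # clvmd must be running on every initiator before the VG is activated
    service clvmd start
    vgchange -a y san-vg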

> If the VMs all want to have a shared filesystem for a running app and the app can't be written to work safely with NFS (why not?), then you can run corosync and friends + GFS2 at that level.

The VMs have their own private devices, each an LV on the SAN VG.
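
In practice that means one LV per guest, activated exclusively on whichever host currently runs the guest (again just a sketch; the LV name is made up):

    # carve a private disk for one guest out of the shared, clustered VG
    lvcreate -L 20G -n vm01-disk san-vg

    # exclusive activation: clvmd takes a cluster-wide exclusive lock, so no
    # other host can activate (and write to) this LV at the same time
    lvchange -a ey san-vg/vm01-disk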

Regards
Andreas

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



