Hi,

at the moment we are using tgt with the RBD backend, compiled from source,
on Ubuntu 12.04 and 14.04 LTS. We have two machines in two IP ranges (e.g.
192.168.1.0/24 and 192.168.2.0/24): one machine in 192.168.1.0/24 and one
in 192.168.2.0/24. The tgt config is the same on both machines and they
export the same RBD. This works well for XenServer.

For VMware you have to disable VAAI to use it with tgt
(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665).
If you don't disable it, ESXi becomes very slow and unresponsive. I think
the problem is the iSCSI WRITE SAME support, but I haven't tested which of
the VAAI settings is actually responsible for this behaviour. (See the
config sketch and the esxcli commands appended below the quoted message.)

Mit freundlichen Grüßen / Best Regards,

--
Consultant

Dipl.-Inf. Uwe Grohnwaldt
Gutleutstr. 351
60327 Frankfurt a. M.

eMail: uwe at grohnwaldt.eu
Phone: +49-69-34878906
Mobile: +49-172-3209285
Fax: +49-69-348789069

----- Original Message -----
> From: "Andrei Mikhailovsky" <andrei at arhont.com>
> To: ceph-users at lists.ceph.com
> Sent: Monday, 12 May 2014 12:00:48
> Subject: Ceph with VMWare / XenServer
>
> Hello guys,
>
> I am currently running a Ceph cluster for running VMs with qemu + rbd.
> It works pretty well and provides a good degree of failover. I am able
> to run maintenance tasks on the Ceph nodes without interrupting VM IO.
>
> I would like to do the same with VMware / XenServer hypervisors, but I
> am not really sure how to achieve this. Initially I thought of using
> iSCSI multipathing; however, as it turns out, multipathing is more for
> load balancing and NIC/switch failure. It does not allow me to perform
> maintenance on the iSCSI target without interrupting service to VMs.
>
> Has anyone done a PoC, or better a production environment, where they
> have used Ceph as backend storage with VMware / XenServer? The
> important element for me is the ability to perform maintenance tasks
> and resilience to failures without interrupting IO to VMs. Are there
> any recommendations or howtos on how this could be achieved?
>
> Many thanks
>
> Andrei
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
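
For anyone who wants to reproduce the tgt side, here is a minimal sketch of
this kind of export. The IQN, pool and image names are placeholders, it
assumes tgt was compiled with the rbd backing store (CEPH_RBD=1), and the
commented bsopts line may need adjusting for your tgt version; the same
file goes on both gateways:

    # /etc/tgt/conf.d/rbd.conf -- identical on both machines
    <target iqn.2014-05.eu.example:rbd.vmstore>
        # rbd backing store, available when tgt is built with CEPH_RBD=1
        bs-type rbd
        # pool/image to export; both gateways point at the same RBD image
        backing-store rbd/vmstore
        # optional: non-default ceph.conf and cephx user
        # bsopts "conf=/etc/ceph/ceph.conf;id=admin"
    </target>

With identical target definitions on both machines, initiators in each IP
range log in to their local gateway while the data lives in the same RBD
image on the Ceph cluster.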
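
On the VMware side, the KB above boils down to turning the three VAAI
primitives off on each ESXi host (ESXi 5.x esxcli syntax shown; the same
options can be changed under Advanced Settings in the vSphere Client).
DataMover.HardwareAcceleratedInit corresponds to the block-zeroing (WRITE
SAME) primitive I suspect above, so you could start by disabling only that
one to narrow it down:

    # 0 = disabled, 1 = enabled
    esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
    esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
    esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking

    # check the current value
    esxcli system settings advanced list --option /DataMover/HardwareAcceleratedInit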