Ceph with VMWare / XenServer


 



Uwe, 

Could you please help me a bit with configuring multipathing on two different storage servers and connecting it to XenServer?

I am looking at the multipathing howto, and it tells me that for multipathing to work, the iSCSI query to the target server should return two paths. However, if you have two separate servers with tgt installed, each one only returns a single path.

I've configured two servers (tgt1 and tgt2) with tgt, each pointing to the same RBD image. The iSCSI config files are identical. One server uses the IP 192.168.170.200, the second uses 192.168.171.200. When I query tgt1, it returns:


192.168.170.200:3260,1 iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 

and tgt2 returns: 

192.168.171.200:3260,1 iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 


According to the documentation, each server should return both paths, like this: 

192.168.170.200:3260,1 iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 
192.168.171.200:3260,1 iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 


Is there a manual way of configuring multipathing? Or have I not created the tgt configs correctly? 
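
To be explicit, by a manual way I mean something along these lines on a plain Linux initiator: discover and log in to each portal separately, then let dm-multipath join the two block devices into one multipathed LUN. This is just a sketch of what I have in mind, not something I have working on XenServer yet:

    # discover each portal separately (each tgt box only advertises its own IP)
    iscsiadm -m discovery -t sendtargets -p 192.168.170.200:3260
    iscsiadm -m discovery -t sendtargets -p 192.168.171.200:3260

    # log in to the same IQN over both portals
    iscsiadm -m node -T iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 -p 192.168.170.200:3260 --login
    iscsiadm -m node -T iqn.2014-04.iscsi-ibstorage.arhont.com:xenserver-iscsi-export-10TB-1 -p 192.168.171.200:3260 --login

    # check that device-mapper multipath sees both paths to the same LUN
    multipath -ll

Is that roughly the idea, or does XenServer expect both portals to come back from a single discovery?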

Cheers 

Andrei 

----- Original Message -----

From: "Uwe Grohnwaldt" <uwe@xxxxxxxxxxxxx> 
To: ceph-users at lists.ceph.com 
Sent: Monday, 12 May, 2014 12:57:48 PM 
Subject: Re: Ceph with VMWare / XenServer 

Hi, 

At the moment we are using tgt with the RBD backend, compiled from source, on Ubuntu 12.04 and 14.04 LTS. We have two machines in two IP ranges (e.g. 192.168.1.0/24 and 192.168.2.0/24), one machine in each range. The tgt config is the same on both machines; they export the same RBD. This works well for XenServer.
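
For reference, the targets.conf is essentially the same file on both machines, along these lines (the target name and the pool/image below are placeholders, not our production names):

    # /etc/tgt/targets.conf -- identical on both target machines
    <target iqn.2014-05.com.example:rbd-export>
        driver iscsi
        bs-type rbd
        # backing-store is <pool>/<image> in the Ceph cluster
        backing-store rbd/xenserver-export
    </target>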

For VMware you have to disable VAAI to use it with tgt (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665). If you don't disable it, ESXi becomes very slow and unresponsive.
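
If I remember correctly, disabling it comes down to setting the block-storage VAAI options to 0 on each ESXi host, something like this (please double-check against the KB article; the same options can also be changed in the vSphere client under Advanced Settings):

    # disable the VAAI primitives on the ESXi host
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
    esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0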

I think the problem is the iSCSI WRITE SAME support, but I haven't tested which of the VAAI settings is responsible for this behavior.

Mit freundlichen Grüßen / Best Regards, 
-- 
Consultant 
Dipl.-Inf. Uwe Grohnwaldt 
Gutleutstr. 351 
60327 Frankfurt a. M. 

eMail: uwe at grohnwaldt.eu 
Telefon: +49-69-34878906 
Mobil: +49-172-3209285 
Fax: +49-69-348789069 

----- Original Message ----- 
> From: "Andrei Mikhailovsky" <andrei at arhont.com> 
> To: ceph-users at lists.ceph.com 
> Sent: Monday, 12 May 2014 12:00:48 
> Subject: Ceph with VMWare / XenServer 
> 
> 
> 
> Hello guys, 
> 
> I am currently running a Ceph cluster for VMs with QEMU + RBD. It 
> works pretty well and provides a good degree of failover: I am able 
> to run maintenance tasks on the Ceph nodes without interrupting the 
> VMs' IO. 
> 
> I would like to do the same with VMware / XenServer hypervisors, but 
> I am not really sure how to achieve this. Initially I thought of 
> using iSCSI multipathing; however, as it turns out, multipathing is 
> more for load balancing and NIC/switch failure. It does not allow me 
> to perform maintenance on the iSCSI target without interrupting 
> service to the VMs. 
> 
> Has anyone done a PoC, or better a production environment, where 
> they've used Ceph as backend storage with VMware / XenServer? The 
> important element for me is the ability to perform maintenance tasks, 
> and resilience to failures, without interrupting IO to the VMs. Are 
> there any recommendations or howtos on how this could be achieved? 
> 
> Many thanks 
> 
> Andrei 
> 
> 


