Ceph with VMWare / XenServer


 



Uwe, do you mind sharing your storage and xenserver iscsi config files? 

Also, what is your performance like? 

Thanks 

----- Original Message -----

From: "Uwe Grohnwaldt" <uwe@xxxxxxxxxxxxx> 
To: ceph-users at lists.ceph.com 
Sent: Monday, 12 May, 2014 2:45:43 PM 
Subject: Re: Ceph with VMWare / XenServer 

Hi, 

yes, we use it in production. I can stop/kill the tgt daemon on one server and XenServer fails over to the second one. We enabled multipathing in XenServer. In our setup we don't have multiple IP ranges, so we scan/log in to the second target on XenServer startup with iscsiadm in rc.local. 

That's for historical reasons: we used Dell EqualLogic before Ceph came in, and there was no need for multipathing (only LACP channels). Now we have enabled multipathing and use tgt, but without different IP ranges. 
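
For illustration, the rc.local login for the second target could look roughly like this. This is a sketch, not our actual configuration: the portal address and IQN below are placeholders.

```shell
# Discover targets on the second portal (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.2.10:3260

# Log in to the discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2014-05.storage.example:rbd0 \
    -p 192.168.2.10:3260 --login
```

With both sessions logged in, XenServer's multipath layer sees two paths to the same LUN even though both targets sit in the same IP range.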

Mit freundlichen Grüßen / Best Regards, 
-- 
Consultant 
Dipl.-Inf. Uwe Grohnwaldt 
Gutleutstr. 351 
60327 Frankfurt a. M. 

eMail: uwe at grohnwaldt.eu 
Telefon: +49-69-34878906 
Mobil: +49-172-3209285 
Fax: +49-69-348789069 

----- Original Message ----- 
> From: "Andrei Mikhailovsky" <andrei at arhont.com> 
> To: "Uwe Grohnwaldt" <uwe at grohnwaldt.eu> 
> Cc: ceph-users at lists.ceph.com 
> Sent: Monday, 12 May 2014 14:48:58 
> Subject: Re: Ceph with VMWare / XenServer 
> 
> 
> Uwe, thanks for your quick reply. 
> 
> Do you run the Xenserver setup on production env and have you tried 
> to test some failover scenarios to see if the xenserver guest vms 
> are working during the failover of storage servers? 
> 
> Also, how did you set up the xenserver iscsi? Have you used the 
> multipath option to set up the LUNs? 
> 
> Cheers 
> 
> 
> 
> 
> ----- Original Message ----- 
> 
> From: "Uwe Grohnwaldt" <uwe at grohnwaldt.eu> 
> To: ceph-users at lists.ceph.com 
> Sent: Monday, 12 May, 2014 12:57:48 PM 
> Subject: Re: Ceph with VMWare / XenServer 
> 
> Hi, 
> 
> at the moment we are using tgt with the RBD backend, compiled from 
> source on Ubuntu 12.04 and 14.04 LTS. We have two machines in two 
> IP ranges (e.g. 192.168.1.0/24 and 192.168.2.0/24): one machine in 
> 192.168.1.0/24 and one in 192.168.2.0/24. The tgt config is the 
> same on both machines; they export the same RBD image. This works 
> well for XenServer. 
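
For reference, a tgt target definition using the RBD backend might look roughly like the fragment below. The IQN, pool, and image names are placeholders; the `bs-type rbd` backstore is only available when tgt is built with Ceph/RBD support, as described above.

```
<target iqn.2014-05.storage.example:rbd0>
    driver iscsi
    # RBD backstore, requires tgt compiled with Ceph support
    bs-type rbd
    # backing store is pool/image
    backing-store rbd/xenserver-lun0
</target>
```

The same file would be deployed unchanged on both target machines, so each exports an identical view of the same RBD image.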
> 
> For VMware you have to disable VAAI to use it with tgt 
> (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665). 
> If you don't disable it, ESXi becomes very slow and unresponsive. 
> 
> I think the problem is the iSCSI WRITE SAME support, but I haven't 
> tested which of the VAAI settings is responsible for this 
> behavior. 
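
The VAAI block primitives referenced in that KB article can be disabled from the ESXi shell along these lines (ESXi 5.x syntax; verify the option names against the KB for your ESXi version):

```shell
# Disable the VAAI block primitives (per VMware KB 1033665)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0
```

The same three options can also be changed per-host in the vSphere client under Advanced Settings; a reboot is not required for them to take effect.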
> 
> Mit freundlichen Grüßen / Best Regards, 
> -- 
> Consultant 
> Dipl.-Inf. Uwe Grohnwaldt 
> Gutleutstr. 351 
> 60327 Frankfurt a. M. 
> 
> eMail: uwe at grohnwaldt.eu 
> Telefon: +49-69-34878906 
> Mobil: +49-172-3209285 
> Fax: +49-69-348789069 
> 
> ----- Original Message ----- 
> > From: "Andrei Mikhailovsky" <andrei at arhont.com> 
> > To: ceph-users at lists.ceph.com 
> > Sent: Monday, 12 May 2014 12:00:48 
> > Subject: Ceph with VMWare / XenServer 
> > 
> > 
> > 
> > Hello guys, 
> > 
> > I am currently running a Ceph cluster for running vms with qemu + 
> > rbd. It works pretty well and provides a good degree of failover. I 
> > am able to run maintenance tasks on the ceph nodes without 
> > interrupting VM I/O. 
> > 
> > I would like to do the same with VMware / XenServer hypervisors, 
> > but I am not really sure how to achieve this. Initially I thought 
> > of using iSCSI multipathing; however, as it turns out, multipathing 
> > is more for load balancing and NIC/switch failure. It does not 
> > allow me to perform maintenance on the iSCSI target without 
> > interrupting service to vms. 
> > 
> > Has anyone done either a PoC or, better, a production environment 
> > where they've used Ceph as backend storage with VMware / XenServer? 
> > The important element for me is the ability to perform maintenance 
> > tasks, and resilience to failures, without interrupting I/O to vms. 
> > Are there any recommendations or howtos on how this could be 
> > achieved? 
> > 
> > Many thanks 
> > 
> > Andrei 
> > 
> > 
> > _______________________________________________ 
> > ceph-users mailing list 
> > ceph-users at lists.ceph.com 
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> > 
> 
> 


