Re: VMware + CEPH Integration


 



> On 15 Jun 2017, at 10:29, Osama Hasebou <osama.hasebou@xxxxxx> wrote:
> 
> We would like to start testing using VMware with CEPH storage. Can people share their experience with production ready ideas they tried and if they were successful? 

We are doing this with 4 OSD nodes (44 OSDs total), 3 separate monitor servers, and two separate iSCSI gateways serving storage to our 3-machine ESX cluster. We use a pair of 10Gb switches, with every OSD node, server, and gateway attached to both.

It generally works pretty well: iSCSI multipathing works nicely, so load is spread across the gateways. In the VMware path selection settings you tell the hosts how many commands to send down one path before switching to the other; I think we use around 100.
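For reference, that per-LUN setting is configured with esxcli on each ESXi host. A rough sketch, assuming a placeholder device identifier (substitute the actual naa. ID of your RBD-backed LUN, which you can find with `esxcli storage nmp device list`):

```shell
# Hypothetical device identifier -- replace with your own LUN's naa. ID
DEVICE="naa.60000000000000000000000000000001"

# Use the Round Robin path selection policy for this device
esxcli storage nmp device set --device "$DEVICE" --psp VMW_PSP_RR

# Switch to the other path after every 100 commands (the default is 1000)
esxcli storage nmp psp roundrobin deviceconfig set \
    --device "$DEVICE" --type iops --iops 100
```

This has to be applied per device and per host; a claim rule can make it the default for new LUNs, but the commands above are the minimal version of what's described here.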

We do, however, run into the classic "LIO gets itself in knots if I/O is delayed" bug every so often, e.g. if an OSD goes down. See for example all the quoted messages at http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-November/005957.html

I'm very much looking forward to proper Ceph backend support for iSCSI via LIO's tcmu + librbd, which I think is planned and might even be production ready in Luminous? Can any devs confirm?

(We would otherwise use the more reliable plain old NFS, but AFAIK it's rather hard to multipath that.)

Oliver.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
