> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Lars Marowsky-Bree
> Sent: 04 July 2016 11:36
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: suse_enterprise_storage3_rbd_LIO_vmware_performance_bad
>
> On 2016-07-01T19:11:34, Nick Fisk <nick@xxxxxxxxxx> wrote:
>
> > To summarise,
> >
> > LIO is just not working very well at the moment because of the ABORT
> > Tasks problem, this will hopefully be fixed at some point. I'm not
> > sure if SUSE works around this, but see below for other pain points
> > with RBD + ESXi + iSCSI
>
> Yes, the SUSE kernel has recent backports that fix these bugs. And there's
> obviously on-going work to improve the performance and code.
>
> That's not to say that I'd advocate iSCSI as a primary access mechanism for
> Ceph. But the need to interface from non-Linux systems to a Ceph cluster is
> unfortunately very real.
>
> > With 1GB networking I think you will struggle to get your write latency
> > much below 10-15ms, but from your example ~30ms is still a bit high. I
> > wonder if the default queue depths on your iSCSI target are too low as well?
>
> Thanks for all the insights on the performance issues. You're really quite
> spot on.

Thanks, it's been a painful experience working through them all, but have learnt a lot along the way.

> The main concern here obviously is that the same 2x1GbE network is carrying
> both the client/ESX traffic, the iSCSI target to OSD traffic, and the OSD
> backend traffic. That is not advisable.
>
> Regards,
> Lars
>
> --
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
> "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
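
A couple of extra notes for anyone hitting the same issues.

On the queue depth question: on the LIO side the relevant knob should be the
default_cmdsn_depth attribute on the target portal group, which caps the
command window per session. A rough sketch of checking and raising it with
targetcli is below -- the IQN is just a placeholder for your own target and
the value is only an example, so treat it as a starting point rather than a
recommendation:

  # show the current per-session command window on the TPG
  targetcli /iscsi/iqn.2016-07.com.example:gateway1/tpg1 get attribute default_cmdsn_depth

  # raise it; example value only
  targetcli /iscsi/iqn.2016-07.com.example:gateway1/tpg1 set attribute default_cmdsn_depth=128

Bear in mind the ESXi initiator has its own per-device and per-adapter limits,
so raising the target side alone may not change what you see from the VMs.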
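
And on the shared 2x1GbE network: splitting the OSD replication/recovery
traffic away from the client/iSCSI gateway traffic is mostly a ceph.conf
change plus an OSD restart. A minimal sketch, assuming the two subnets below
stand in for your own ranges:

  [global]
      # clients, MONs and the iSCSI gateways reach the OSDs on this network
      public network = 192.168.10.0/24
      # OSD-to-OSD replication, recovery and backfill stay on this network
      cluster network = 192.168.20.0/24

With only 2x1GbE in total it is probably worth dedicating one link to each
role rather than bonding both, otherwise the replication writes still end up
competing with the front-end traffic for the same bandwidth.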