Re: suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

On 2016-07-01T19:11:34, Nick Fisk <nick@xxxxxxxxxx> wrote:

> To summarise,
> 
> LIO is just not working very well at the moment because of the ABORT Tasks problem; this will hopefully be fixed at some point. I'm not sure whether SUSE works around this, but see below for other pain points with RBD + ESXi + iSCSI.

Yes, the SUSE kernel has recent backports that fix these bugs, and
there's ongoing work to improve both the performance and the code.

That's not to say that I'd advocate iSCSI as a primary access mechanism
for Ceph. But the need to interface from non-Linux systems to a Ceph
cluster is unfortunately very real.
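
For anyone who wants to reproduce the basic setup, the shape of an
RBD-backed LIO target looks roughly like this (a minimal sketch using
the kernel RBD client and targetcli; the pool/image names and the IQN
are purely illustrative):

    # Map the RBD image on the gateway node (kernel RBD client).
    rbd map rbd/esx-lun0                  # appears as e.g. /dev/rbd0

    # Export the mapped device through LIO.
    targetcli /backstores/block create name=esx-lun0 dev=/dev/rbd0
    targetcli /iscsi create iqn.2016-07.org.example:rbd-gw
    targetcli /iscsi/iqn.2016-07.org.example:rbd-gw/tpg1/luns \
        create /backstores/block/esx-lun0

On SUSE Enterprise Storage the lrbd tool generates this LIO
configuration from a JSON description rather than driving targetcli
by hand.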

> With 1GB networking I think you will struggle to get your write latency much below 10-15ms, but from your example ~30ms is still a bit high. I wonder if the default queue depths on your iSCSI target are too low as well?

Thanks for all the insights on the performance issues. You're really
quite spot on.

The main concern here, obviously, is that the same 2x1GbE network is
carrying the client/ESXi traffic, the iSCSI-target-to-OSD traffic, and
the OSD backend (replication) traffic all at once. That is not
advisable.
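
On the queue depth point: the command window is capped per session by
the TPG's default_cmdsn_depth attribute, which is easy to check and
raise (again a sketch; the IQN is illustrative):

    # Inspect / raise the per-session command window on the target.
    targetcli /iscsi/iqn.2016-07.org.example:rbd-gw/tpg1 \
        get attribute default_cmdsn_depth
    targetcli /iscsi/iqn.2016-07.org.example:rbd-gw/tpg1 \
        set attribute default_cmdsn_depth=64

    # Compare against raw RADOS write latency with a single 4k op in
    # flight, to see how much the iSCSI layer itself adds:
    rados bench -p rbd 30 write -b 4096 -t 1

If the raw RADOS number is already close to the ~30ms seen through
ESXi, the time is going into the network and OSDs rather than into the
target.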


Regards,
    Lars

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



