[Single OSD performance on SSD] Can't go over 3.2K IOPS


 



Hi Sébastien,

On Thu, Aug 28, 2014 at 06:11:37PM +0200, Sebastien Han wrote:
> Hey all,
(...)
> We have been able to reproduce this on 3 distinct platforms with some deviations (because of the hardware) but the behaviour is the same.
> Any thoughts will be highly appreciated, only getting 3.2k out of a 29K IOPS SSD is a bit frustrating :).

Yes,
it's the OSD code path, which runs through ~20k lines of code and makes ~1000 system calls for every real IO.
Please have a look at the attached page, which I presented last Sept at SDC.

=> Ceph (v0.61) used ~1600 µs for a 4k IO; the SSD itself needs ~60 µs, the network ~20-200 µs
	(on v0.84 with all debug set to 0/0 it is still ~1100 µs)
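A quick back-of-the-envelope check (my own sketch, not from Sebastien's measurements) shows why ~1600 µs of per-IO software overhead caps throughput far below the drive's rated IOPS:

```python
# Back-of-envelope: if the OSD spends ~1600 µs of code path per 4k IO,
# one latency-bound stream can complete at most 1 s / 1600 µs IOs per second.
PER_IO_US = 1600                       # v0.61 figure quoted above
single_stream_iops = 1_000_000 / PER_IO_US
print(single_stream_iops)              # 625.0 IOPS per serialized stream

# The reported ~3.2k IOPS would then correspond to roughly this much
# effective parallelism (queue depth / threads) around the OSD:
effective_parallelism = 3200 / single_stream_iops
print(effective_parallelism)           # 5.12
```

So even a perfect SSD service time (~60 µs) barely matters here; the software path dominates, which is the point above.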

=> so, tune or re-write the OSD code ... (I'm serious about this statement)


Mit freundlichen Grüßen / Best regards
Dieter 


> 
> Cheers.
> Sébastien Han 
> Cloud Architect 
> 
> "Always give 100%. Unless you're giving blood."
> 
> Phone: +33 (0)1 49 70 99 72 
> Mail: sebastien.han at enovance.com 
> Address : 11 bis, rue Roquépine - 75008 Paris
> Web : www.enovance.com - Twitter : @enovance 
> 


> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Ceph-OSD-TAT.pdf
Type: application/pdf
Size: 73460 bytes
Desc: not available
URL: <http://lists.ceph.com/pipermail/ceph-users-ceph.com/attachments/20140829/a043eaae/attachment.pdf>

