Re: RBD poor performance

At some point I would expect the CPU to be the bottleneck. The advice on 
this list has always been: for better latency, get fast CPUs. 
It would be nice to know what clock speed (GHz) you are testing at, and how 
that scales with replication 1-3; erasure coding probably also takes a hit.
How do you test the maximum iops of an OSD? (Just curious, so I can test 
mine.)
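
(In case it helps, a rough sketch of what I would try myself, assuming fio 
is built with the rbd engine and there is a throwaway test image called 
'bench' in a pool called 'rbd'; both names are just placeholders:)

# 4k random writes against an RBD image through librbd
fio --name=rbd-bench --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=bench --rw=randwrite --bs=4k --iodepth=128 \
    --numjobs=1 --runtime=60 --time_based

# object level, per pool instead of per image
rados bench -p rbd 60 write -b 4096 -t 64

# a single OSD in isolation (arguments are total bytes and block size)
ceph tell osd.0 bench 134217728 4096

Comparing the three gives a rough idea of where the time goes: fio shows 
the full client path, rados bench drops the RBD layer, and the osd bench 
runs locally on the OSD without the network.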

A while ago I posted a CephFS test here on SSD with replication 1 that was 
performing nowhere near native speed, asking if that was normal, but I 
never got a response. I do remember they sent everyone a questionnaire 
asking whether they should focus more on performance; now I wish I had 
checked that box ;)




-----Original Message-----
From: Vitaliy Filippov [mailto:vitalif@xxxxxxxxxx] 
Sent: 27 February 2019 22:25
To: ceph-users@xxxxxxxxxxxxxx; Weird Deviations
Subject: Re: RBD poor performance

To me it seems Ceph's iops limit is around 10000 (maybe 15000 with BIS 
hardware) per OSD. Beyond that number it starts to get stuck on CPU.

I've tried creating a pool from 3 OSDs on loop devices backed by tmpfs and 
only got ~15000 iops :) Good disks aren't the bottleneck, the CPU is.
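
(Roughly, that kind of throwaway setup looks like this; a sketch where the 
tmpfs size, loop device names, pool name and replication factor are all 
placeholders:)

# back 3 loop devices with files on tmpfs
mount -t tmpfs -o size=16G tmpfs /mnt/tmpfs
for i in 0 1 2; do
    truncate -s 4G /mnt/tmpfs/osd$i.img
    losetup /dev/loop$i /mnt/tmpfs/osd$i.img
done

# then create one OSD per loop device with your usual tooling, e.g.
#   ceph-volume lvm create --data /dev/loop0   (repeat for loop1, loop2)

# and a pool on top of them for benchmarking only
ceph osd pool create tmpfs-test 128 128 replicated
# ceph osd pool set tmpfs-test size 1   # if testing replication 1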

--
With best regards,
   Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

