Re: fio test rbd - single thread - qd1

On 3/20/19 3:12 AM, Vitaliy Filippov wrote:
> `cpupower idle-set -D 0` will help you a lot, yes.

> However, it seems that it's not only bluestore that makes it slow: >= 50% of the latency is introduced by the OSD itself. I'm just trying to understand WHAT parts of it are doing so much work. For example, in my current case (with `cpupower idle-set -D 0`, of course), when I was testing a single OSD on a very good drive (an Intel NVMe capable of 40000+ single-thread sync write iops), it delivered only 950-1000 iops. That's roughly 1 ms of latency, and only 50% of it comes from bluestore (you can see it in `ceph daemon osd.x perf dump`)! I've even tuned bluestore a little, so now I'm getting ~1200 iops from it. That means bluestore's latency dropped by about 33%: total latency went from 1/1000 = 1 ms down to 1/1200 = ~833 us, so bluestore's share fell from ~500 us to ~333 us. But the overall improvement is still only 20% - everything else is eaten by the OSD itself.
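For anyone reproducing this, it's worth verifying that the C-state change actually stuck before benchmarking. A quick sketch using the standard cpuidle sysfs interface (`-D 0` disables every idle state with wakeup latency greater than 0, i.e. everything except POLL):

    # Disable all idle states deeper than the zero-latency POLL state:
    cpupower idle-set -D 0

    # Verify: "disable" should read 1 for every state except state0 (POLL):
    grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/disable

Note this trades idle power consumption for latency, so it's a benchmarking/latency-tuning setting, not a sensible default.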
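To reproduce the latency breakdown above, the per-daemon counters from the admin socket are the place to look. A minimal sketch, assuming jq is available; the counter names are from Luminous/Mimic-era perf dumps and may differ in your release:

    # Average whole-OSD write latency vs. bluestore commit latency:
    ceph daemon osd.0 perf dump | \
        jq '{osd_op_w: .osd.op_w_latency.avgtime, bluestore_commit: .bluestore.commit_lat.avgtime}'

The gap between the two numbers is roughly the time spent outside bluestore (messenger, PG layer, pglog), which is the portion discussed below.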


I'd suggest looking in the direction of pglog.  See:


https://www.spinics.net/lists/ceph-devel/msg38975.html


Back around that time, I hacked pglog updates out of the code while testing a custom version of the memstore backend and saw some pretty dramatic reductions in CPU usage (and at least a modest increase in performance).  Unfortunately I think fixing it properly is going to be a big job, but it's high on my list of troublemakers.
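Until then, the only knobs exposed are the pg log length caps; a hedged sketch (these bound log length and trim work, they do not remove the per-write pglog update itself, and the values below are illustrative assumptions, not recommendations):

    # Inspect per-PG log sizes (see the LOG columns in the pg table):
    ceph pg dump | head -20

    # Cap pg log length cluster-wide (illustrative values):
    ceph config set osd osd_min_pg_log_entries 500
    ceph config set osd osd_max_pg_log_entries 2000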


Mark





