Re: fio test rbd - single thread - qd1

> `cpupower idle-set -D 0` will help you a lot, yes.
>
> However, it seems that it is not only bluestore that makes it slow; >= 50%
> of the latency is introduced by the OSD itself. I'm just trying to
> understand WHAT parts of it are doing so much work. For example, in my
> current case (with `cpupower idle-set -D 0`, of course), when I was testing
> a single OSD on a very good drive (an Intel NVMe capable of 40000+
> single-thread sync write iops), it delivered only 950-1000 iops. That is
> roughly 1 ms of latency, and only 50% of it comes from bluestore (you can
> see it with `ceph daemon osd.x perf dump`)! I've even tuned bluestore a
> little, so now I'm getting ~1200 iops from it. That means bluestore's
> latency dropped by 33% (it was around 50% of 1/1000 s = 500 us; now it is
> 1/1200 s - 500 us = ~330 us). But still, the overall improvement is only
> 20% - everything else is eaten by the OSD itself.
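The arithmetic in the quoted paragraph can be checked with a quick sketch. The 50/50 bluestore/OSD split and the assumption that the non-bluestore part stays constant after tuning are taken from the message itself:

```python
# Back-of-the-envelope split of the numbers quoted above.
# At queue depth 1 with a single thread, latency per op = 1 / IOPS.

US_PER_S = 1_000_000

def total_latency_us(iops):
    """Average per-op latency in microseconds at QD1."""
    return US_PER_S / iops

before = total_latency_us(1000)          # ~1000 us total per write
after = total_latency_us(1200)           # ~833 us after bluestore tuning

bluestore_before = 0.5 * before          # ~50% attributed to bluestore: 500 us
rest_of_osd = before - bluestore_before  # 500 us, assumed unchanged by tuning

bluestore_after = after - rest_of_osd    # ~333 us
bluestore_drop = 1 - bluestore_after / bluestore_before  # ~33% lower
overall_gain = 1200 / 1000 - 1           # but only 20% more iops overall
```

This is why a 33% drop inside bluestore only shows up as a 20% end-to-end gain: the fixed ~500 us spent elsewhere in the OSD dominates as bluestore gets faster.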


Thanks for the insight - that means the SSD numbers for read/write
performance are roughly OK, I guess.

It still puzzles me why the bluestore caching does not benefit the
read side.

Is the cache not an LRU cache over the block device, or is it actually used
for something else?
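For reference, bluestore's cache sizing is controlled by options along these lines (option names from Luminous-era releases; the values below are purely illustrative). Note that the cache is split between the RocksDB block cache, onode metadata, and object data, so only part of it caches data blocks at all:

```ini
[osd]
# Total bluestore cache per OSD, in bytes (illustrative: 3 GiB)
bluestore cache size ssd = 3221225472
# Fractions reserved for the RocksDB block cache and onode metadata;
# whatever remains is used for object data
bluestore cache kv ratio = 0.4
bluestore cache meta ratio = 0.4
```

If most of the cache is going to KV/metadata, that could be one reason cached reads do not look much faster than the raw device.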

Jesper

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


