Re: higher read iop/s for single thread

On 10.09.2015 at 17:20, Mark Nelson wrote:
I'm not sure you will be able to get there with firefly.  I've gotten
close to 1ms after lots of tuning on hammer, but 0.5ms is probably not
likely to happen without all of the new work that
Sandisk/Fujitsu/Intel/Others have been doing to improve the data path.

Your best bet is probably going to be a combination of:

1) switch to jemalloc (and make sure you have enough RAM to deal with it)
2) disable ceph auth
3) disable all logging (a ceph.conf sketch covering 2 and 3 follows below)
4) throw a high clock speed CPU at the OSDs and keep the number of OSDs
per server lowish (will need to be tested to see where the sweet spot is).
5) potentially implement some kind of scheme to make sure OSD threads
stay pinned to specific cores (a shell sketch covering 1 and 5 follows below).
6) lots of investigation to make sure the kernel/tcp stack/vm/etc isn't
getting in the way.
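
For points 2 and 3, a minimal ceph.conf sketch might look like this (trim
the debug list to taste, and only drop auth on a trusted network):

    [global]
    # 2) disable cephx authentication (only safe on a trusted network)
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none

    # 3) silence the most expensive debug subsystems
    debug_lockdep = 0/0
    debug_context = 0/0
    debug_crush = 0/0
    debug_mon = 0/0
    debug_ms = 0/0
    debug_osd = 0/0
    debug_journal = 0/0
    debug_filestore = 0/0
    debug_auth = 0/0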
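
And roughly for points 1 and 5 (the jemalloc library path is
distro-specific; the Debian/Ubuntu path below is just an assumption):

    # 1) preload jemalloc when starting an OSD by hand
    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 ceph-osd -i 0

    # 5) pin all threads of a running OSD to specific cores (here 2-3)
    taskset -acp 2,3 $(pidof -s ceph-osd)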

Thanks, will do. The strange thing is that iotop currently shows more threads involved (4-6), and fio can easily reach 5000 iop/s with 4 threads doing 16k randread. So I don't yet understand the difference in workload.
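
For reference, the two cases map to fio jobs roughly like these (a sketch;
/dev/vdb is just a placeholder device):

    [global]
    ioengine=libaio
    direct=1
    rw=randread
    bs=16k
    runtime=60
    filename=/dev/vdb

    # four threads, one request in flight each: latency hides behind parallelism
    [four-threads]
    numjobs=4
    iodepth=1

    # one thread, one request in flight: bound by per-request round-trip time
    [single-thread]
    stonewall
    numjobs=1
    iodepth=1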

Stefan


Mark

On 09/10/2015 08:34 AM, Stefan Priebe - Profihost AG wrote:
Hi,

while we're happy running ceph firefly in production and reach enough 4k
read iop/s for multithreaded apps (around 23,000) with qemu 2.2.1, we now
have a customer whose single-threaded application needs around 2000 iop/s,
but we don't get above 600 iop/s in this case.
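
With a queue depth of 1, those numbers translate directly into a
per-request latency budget:

    2000 iop/s  =>  1 / 2000 s  =  0.5 ms per read
     600 iop/s  =>  1 /  600 s  ~  1.7 ms per read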

Any tuning hints for this case?

Stefan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



