Re: higher read iop/s for single thread

On Fri, Sep 11, 2015 at 9:52 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
>> Mark Nelson
>> Sent: 10 September 2015 16:20
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re:  higher read iop/s for single thread
>>
>> I'm not sure you will be able to get there with firefly.  I've gotten
>> close to 1ms after lots of tuning on hammer, but 0.5ms is probably not
>> likely to happen without all of the new work that
>> Sandisk/Fujitsu/Intel/Others have been doing to improve the data path.
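
(For context on why these latency figures map directly onto the request
quoted further down: at queue depth 1, IOPS is simply the reciprocal of the
per-operation latency, so

    2000 iop/s  =>  1/2000 s  =  0.5 ms per read
     600 iop/s  =>  1/600 s  ~=  1.7 ms per read

which is why the 2,000 iop/s single-thread target needs roughly 0.5 ms
end-to-end reads.)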
>
> Hi Mark, is that for 1 or 2+ copies? Fast SSDs, I assume?
>
> What's the best you can get with HDDs + SSD journals?
>
> Just out of interest, I tried switching a small test cluster to use
> jemalloc last night; it's only 4 HDD OSDs with SSD journals. I didn't see
> any improvement over tcmalloc at 4kb IO, but I guess this is expected at
> this end of the performance spectrum. However, what I did notice is that
> at 64kb IO size jemalloc was around 10% slower than tcmalloc. I can do a
> full sweep of IO sizes to double-check this if it would be handy? It might
> need to be considered if jemalloc will be the default going forwards.
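
(For anyone wanting to repeat Nick's test: a minimal sketch of how the
allocator is usually swapped on a firefly/hammer-era cluster without
rebuilding packages. The library path and the OSD id below are assumptions
and vary per distribution and host.)

    # assumed jemalloc path for a Debian/Ubuntu box; adjust for your distro
    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 ceph-osd -i 0 -f
    # confirm the allocator really is mapped into the running process
    grep jemalloc /proc/<osd-pid>/maps

In practice you would wire the same LD_PRELOAD into whatever init script or
environment file starts your OSDs rather than launching them by hand.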

Mark, have you run any tests like this on more standard hardware? I
haven't heard anything like this, but if jemalloc is also *slower* on more
standard systems then that'll definitely put the kibosh on switching to it.
-Greg "fighting the good fight" ;)

>
>
>>
>> Your best bet is probably going to be a combination of:
>>
>> 1) switch to jemalloc (and make sure you have enough RAM to deal with it)
>> 2) disable cephx auth
>> 3) disable all logging (see the config sketch after this list for 2 and 3)
>> 4) throw a high-clock-speed CPU at the OSDs and keep the number of OSDs
>> per server lowish (will need to be tested to see where the sweet spot is).
>> 5) potentially implement some kind of scheme to make sure OSD threads
>> stay pinned to specific cores.
>> 6) lots of investigation to make sure the kernel/tcp stack/vm/etc isn't
>> getting in the way.
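
A rough sketch of what 2) and 3) translate to in ceph.conf on a
firefly/hammer cluster; check the option names against your release, and
note that turning cephx off removes authentication for the whole cluster:

    [global]
        # 2) disable cephx authentication
        auth cluster required = none
        auth service required = none
        auth client required = none

        # 3) silence the most expensive debug logging (log/in-memory levels)
        debug ms = 0/0
        debug osd = 0/0
        debug filestore = 0/0
        debug journal = 0/0
        debug auth = 0/0

For 5), taskset or cgroup cpusets on the ceph-osd processes are the usual
low-tech starting points.
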
>>
>> Mark
>>
>> On 09/10/2015 08:34 AM, Stefan Priebe - Profihost AG wrote:
>> > Hi,
>> >
>> > We're happy running ceph firefly in production, and we reach enough 4k
>> > read iop/s for multithreaded apps (around 23,000) with qemu 2.2.1.
>> >
>> > We now have a customer with a single-threaded application that needs
>> > around 2,000 iop/s, but we don't get above 600 iop/s in this case.
>> >
>> > Any tuning hints for this case?
>> >
>> > Stefan
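
To see how close the 600 iop/s figure is to the raw round-trip latency of
the whole stack, it is worth first measuring queue-depth-1 random reads
from inside the guest. A hedged fio example; the device name is an
assumption, and since it only reads it is non-destructive:

    fio --name=qd1-randread --filename=/dev/vdb --direct=1 \
        --ioengine=libaio --rw=randread --bs=4k --iodepth=1 \
        --numjobs=1 --runtime=60 --time_based

The reported average completion latency is effectively the per-read round
trip through qemu, librbd, the network and the primary OSD, and 1 divided
by that latency is the best a single-threaded workload can achieve.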
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


