Why is librbd1 / librados2 from Firefly 20% slower than the one from Dumpling?

Yeah, it's fighting for attention with a lot of other urgent stuff. :(

Anyway, even if you can't look up any details or reproduce at this
time, I'm sure you know what shape the cluster was (number of OSDs,
running on SSDs or hard drives, etc), and that would be useful
guidance. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
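
(Editorial note: the cluster shape Greg asks about can usually be reconstructed
from the standard status commands, run on any node with an admin keyring. A
minimal sketch; exact subcommands and output vary a little between releases.)

  ceph -s         # overall status: monitors, OSDs up/in, PG states
  ceph osd tree   # OSD-to-host layout: how many OSDs, on which hosts
  ceph osd dump   # pool definitions: replica size and pg_num of the test pool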


On Wed, Jul 2, 2014 at 6:12 AM, Stefan Priebe - Profihost AG
<s.priebe at profihost.ag> wrote:
>
> On 02.07.2014 15:07, Haomai Wang wrote:
>> Could you give some perf counters from the rbd client side, such as op latency?
>
> Sorry, I don't have any counters. As this mail went unanswered for some
> days, I thought nobody had an idea or could help.
>
> Stefan
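
(Editorial note: the client-side counters Haomai asks about can be collected by
giving the librbd client an admin socket and dumping it while fio runs. A
sketch, assuming the client authenticates as client.admin and uses the socket
path pattern below; the exact counter names differ between releases.)

  # in ceph.conf on the client host, enable a per-process admin socket
  [client]
      admin socket = /var/run/ceph/$cluster-$type.$id.$pid.asok

  # while the fio run is active, list and dump the counters
  ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.asok perf schema
  ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.asok perf dump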
>
>> On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG
>> <s.priebe at profihost.ag> wrote:
>>> On 02.07.2014 00:51, Gregory Farnum wrote:
>>>> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
>>>> <s.priebe at profihost.ag> wrote:
>>>>> Hi Greg,
>>>>>
>>>>> On 26.06.2014 02:17, Gregory Farnum wrote:
>>>>>> Sorry we let this drop; we've all been busy traveling and things.
>>>>>>
>>>>>> There have been a lot of changes to librados between Dumpling and
>>>>>> Firefly, but we have no idea what would have made it slower. Can you
>>>>>> provide more details about how you were running these tests?
>>>>>
>>>>> It's just a normal fio run:
>>>>> fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
>>>>> --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
>>>>> --runtime=90 --numjobs=32 --direct=1 --group
>>>>>
>>>>> Running once with the Firefly libs and once with the Dumpling libs.
>>>>> The target is always the same pool on a Firefly Ceph cluster.
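
(Editorial note: the thread does not say how the two library versions were
swapped. One way to do such an A/B run without reinstalling packages is to let
the dynamic linker pick up the alternative client libraries; a sketch with
hypothetical install paths.)

  # hypothetical paths holding the two builds of librbd1 / librados2
  LD_LIBRARY_PATH=/opt/ceph-dumpling/lib \
    fio --ioengine=rbd --bs=4k --name=foo --invalidate=0 \
        --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 \
        --pool=teststor --runtime=90 --numjobs=32 --direct=1 --group

  LD_LIBRARY_PATH=/opt/ceph-firefly/lib \
    fio ...   # identical arguments, only the library path changes

  # confirm which librbd/librados fio actually resolved
  LD_LIBRARY_PATH=/opt/ceph-firefly/lib ldd "$(which fio)" | grep -E 'librbd|librados'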
>>>>
>>>> What's the backing cluster you're running against? What kind of CPU
>>>> usage do you see with both? 25k IOPS is definitely getting up there,
>>>> but I'd like some guidance about whether we're looking for a reduction
>>>> in parallelism, or an increase in per-op costs, or something else.
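
(Editorial note: one simple way to get the CPU numbers Greg asks for is to
sample the fio processes while the benchmark runs; a sketch using pidstat from
sysstat, with perf as an optional follow-up, both assumed to be installed on
the client host.)

  # per-second CPU usage of all fio processes during the run
  pidstat -u -p "$(pgrep -d, -x fio)" 1

  # optional: see where the client-side CPU time is spent
  perf top -p "$(pgrep -d, -x fio)"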
>>>
>>> Hi Greg,
>>>
>>> I don't have that test cluster anymore. It had to go into production
>>> with Dumpling.
>>>
>>> So I can't tell you.
>>>
>>> Sorry.
>>>
>>> Stefan
>>>
>>>> -Greg
>>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>>
>>

