Re: krbd blk-mq support ?

Could you describe the 2x 70000 iops result in more detail?
Do you mean that 8 OSDs, each backed by an SSD, can reach 140000 iops in total?
Is it read or write? Could you share the fio options?
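
For reference, a 4k randread run of the kind discussed here is usually driven
with something like this (the device path and option values below are only my
guess, not your actual setup):

    fio --name=krbd-randread --filename=/dev/rbd0 --direct=1 \
        --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
        --numjobs=4 --runtime=60 --time_based --group_reporting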

On Fri, Oct 31, 2014 at 12:01 AM, Alexandre DERUMIER
<aderumier@xxxxxxxxx> wrote:
>>>I'll try to add more OSDs next week; if it scales, that's very good news!
>
> I just tried to add 2 more OSDs,
>
> I can now reach 2x 70000 iops on 2 client nodes (vs 2 x 50000 previously).
>
> and kworker CPU usage is also lower (84% vs 97%),
> though I don't understand exactly why.
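>
> (For reference, per-thread kworker CPU can be read with something like
> "top -b -n1 -H | grep kworker"; this exact command is just an example.)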
>
> So, thanks for the help, everybody!
>
>
>
>
>
> ----- Original Message -----
>
> De: "Alexandre DERUMIER" <aderumier@xxxxxxxxx>
> À: "Sage Weil" <sage@xxxxxxxxxxxx>
> Cc: "Christoph Hellwig" <hch@xxxxxxxxxxxxx>, "Ceph Devel" <ceph-devel@xxxxxxxxxxxxxxx>
> Envoyé: Jeudi 30 Octobre 2014 09:11:11
> Objet: Re: krbd blk-mq support ?
>
>>>Hmm, this is probably the messenger.c worker then that is feeding messages
>>>to the network. How many OSDs do you have? It should be able to scale
>>>with the number of OSDs.
>
> Thanks Sage for your reply.
>
> Currently 6 OSDs (SSD) on the test platform.
>
> But I can reach 2x 50000 iops on the same rbd volume with 2 clients on 2 different hosts.
> Do you think the messenger.c worker could be the bottleneck in this case?
>
>
> I'll try to add more OSDs next week; if it scales, that's very good news!
>
>
>
>
>
>
>
> ----- Original Message -----
>
> De: "Sage Weil" <sage@xxxxxxxxxxxx>
> À: "Alexandre DERUMIER" <aderumier@xxxxxxxxx>
> Cc: "Christoph Hellwig" <hch@xxxxxxxxxxxxx>, "Ceph Devel" <ceph-devel@xxxxxxxxxxxxxxx>
> Envoyé: Mercredi 29 Octobre 2014 16:00:56
> Objet: Re: krbd blk-mq support ?
>
> On Wed, 29 Oct 2014, Alexandre DERUMIER wrote:
>> >>Oh, that's without the blk-mq patch?
>>
>> Yes, sorry, I don't know how to use perf with a custom-compiled kernel.
>> (Usually I'm using perf from Debian, with the linux-tools package provided with the Debian kernel package.)
>>
>> >>Either way the profile doesn't really sum up to a fully used up cpu.
>>
>> But I see mostly the same behaviour with or without the blk-mq patch: there is
>> always 1 kworker at around 97-100% CPU (1 core) for 50000 iops.
>>
>> I had also tried mapping the rbd volume with nocrc; it goes to 60000 iops with
>> the same kworker at around 97-100% CPU.
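>>
>> For reference, the nocrc mapping looks roughly like this (the pool/image
>> names are just examples, not my real ones):
>>
>> rbd map rbd/test -o nocrc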
>
> Hmm, this is probably the messenger.c worker then that is feeding messages
> to the network. How many OSDs do you have? It should be able to scale
> with the number of OSDs.
>
> sage
>
>
>>
>>
>>
>> ----- Original Message -----
>>
>> De: "Christoph Hellwig" <hch@xxxxxxxxxxxxx>
>> ?: "Alexandre DERUMIER" <aderumier@xxxxxxxxx>
>> Cc: "Ceph Devel" <ceph-devel@xxxxxxxxxxxxxxx>
>> Envoy?: Mardi 28 Octobre 2014 19:07:25
>> Objet: Re: krbd blk-mq support ?
>>
>> On Mon, Oct 27, 2014 at 11:00:46AM +0100, Alexandre DERUMIER wrote:
>> > >>Can you do a perf record -ag and then a perf report to see where these
>> > >>cycles are spent?
>> >
>> > Yes, sure.
>> >
>> > I have attached the perf report to this mail.
>> > (This is with kernel 3.14, don't have access to my 3.18 host for now)
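>> >
>> > For reference, the profile was captured roughly like this (the duration is
>> > just an example):
>> >
>> > perf record -ag -- sleep 30
>> > perf report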
>>
>> Oh, that's without the blk-mq patch?
>>
>> Either way the profile doesn't really sum up to a fully used up
>> cpu. Sage, Alex - are there any ordering constraints in the rbd client?
>> If not we could probably aim for per-cpu queues using blk-mq and a
>> socket per cpu or similar.



-- 
Best Regards,

Wheat