Re: rados bench single instance vs. multiple instances

On Mon, May 11, 2015 at 10:15 AM, Deneau, Tom <tom.deneau@xxxxxxx> wrote:
>
>
>> -----Original Message-----
>> From: Sage Weil [mailto:sage@xxxxxxxxxxxx]
>> Sent: Monday, May 11, 2015 12:04 PM
>> To: Deneau, Tom
>> Cc: ceph-devel
>> Subject: Re: rados bench single instance vs. multiple instances
>>
>> On Mon, 11 May 2015, Deneau, Tom wrote:
>> > I have noticed the following while running rados bench seq read tests
>> > with a 40M object size
>> >
>> >     single rados bench, 4 concurrent ops:                    bandwidth = 190 MB/s
>> >     4 copies of rados bench, 1 concurrent op each:  aggregate bandwidth = 310 MB/s
>> >
>> > and in fact the single rados bench seems limited to 190 MB/s, no matter
>> > how many concurrent ops.
>> >
>> > I don't see this kind of behavior with a 4M object size.
>> >
>> > (The above are with caches dropped on the osd targets)
>> >
>> > It doesn't seem to be related to the total number of bytes being
>> > processed by the single instance, because if I don't drop the caches, both
>> > the single rados bench and the 4-copy rados bench get much higher numbers
>> > (600 vs. 900 MB/s), but still the single rados bench appears limited, no
>> > matter how many concurrent ops are used.
>> >
>> > Is there some kind of throttling going on by design here?
>>
>> It might be the librados throttles:
>>
>> OPTION(objecter_inflight_op_bytes, OPT_U64, 1024*1024*100) // max in-flight data (both directions)
>> OPTION(objecter_inflight_ops, OPT_U64, 1024)                // max in-flight ios
>>
>> IIRC these only affect librados.. which would include 'rados bench'.
>>
>> sage
>>
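For context, the kind of run being described is roughly the following; this
is only a hedged sketch (the pool name "testpool", the 60-second runtime, and
the op counts are placeholders, not the actual invocation used):

    # populate the pool with 40 MB objects and keep them for the read pass
    rados -p testpool bench 60 write -b 41943040 -t 4 --no-cleanup

    # on each OSD host (as root), drop the page cache before reading
    sync; echo 3 > /proc/sys/vm/drop_caches

    # single client, 4 concurrent ops
    rados -p testpool bench 60 seq -t 4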
>
> I noticed those, and tried setting them higher, but it didn't seem to have an effect.
> But maybe I didn't change them correctly.
>
> What I did:
>    * stop all osds
>    * add the following lines to ceph.conf
>      [osd]
>         objecter_inflight_op_bytes = 1048576000

This is a client-side configuration, so putting it in the OSD section
won't have any effect.
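As a rough sketch, the override would instead go in a section the client
reads, e.g. [client] (or [global]) in the /etc/ceph/ceph.conf on the host
running rados bench; the values below are just examples:

    [client]
        objecter_inflight_op_bytes = 1048576000
        objecter_inflight_ops = 8192

No OSD restart is needed for this; librados reads the config when the client
process starts, so the next rados bench run picks it up.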

Do note that even without this, one client with 4 requests outstanding is
not quite the same as 4 clients with 1 request outstanding each: a single
client only gets one thread to communicate with each OSD, so the cluster
sees a bit less parallelism when outstanding requests map to the same OSD.
But I wouldn't expect that difference to be very noticeable here.
-Greg
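For concreteness, the two client layouts contrasted above look roughly like
this (hedged sketch: pool name and runtime are placeholders, and depending on
the rados version each instance may need its own --run-name or pool so it has
a distinct object set to read back):

    # one client process, 4 ops in flight
    rados -p testpool bench 60 seq -t 4

    # four client processes, 1 op in flight each
    for i in 1 2 3 4; do
        rados -p testpool bench 60 seq -t 1 &
    done
    wait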

>    * restart all osds
>    * 'config show' via the osd admin daemon shows the new value
>    * made sure /etc/ceph/ceph.conf on the client also had those lines added
>       (although I was not sure if that was needed?)
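One way to confirm the value the client process actually runs with (a hedged
sketch; the socket path pattern is an assumption about the local setup) is to
give clients an admin socket and query it while rados bench is running:

    [client]
        admin socket = /var/run/ceph/$cluster-$name.$pid.asok

    # on the client host, while rados bench is running (<pid> is a placeholder):
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.asok config show \
        | grep objecter_inflight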
>
> -- Tom