Re: rados bench single instance vs. multiple instances

On Mon, May 11, 2015 at 10:25 AM, Deneau, Tom <tom.deneau@xxxxxxx> wrote:
>
>
>> -----Original Message-----
>> From: Gregory Farnum [mailto:greg@xxxxxxxxxxx]
>> Sent: Monday, May 11, 2015 12:18 PM
>> To: Deneau, Tom
>> Cc: Sage Weil; ceph-devel
>> Subject: Re: rados bench single instance vs. multiple instances
>>
>> On Mon, May 11, 2015 at 10:15 AM, Deneau, Tom <tom.deneau@xxxxxxx> wrote:
>> >
>> >
>> >> -----Original Message-----
>> >> From: Sage Weil [mailto:sage@xxxxxxxxxxxx]
>> >> Sent: Monday, May 11, 2015 12:04 PM
>> >> To: Deneau, Tom
>> >> Cc: ceph-devel
>> >> Subject: Re: rados bench single instance vs. multiple instances
>> >>
>> >> On Mon, 11 May 2015, Deneau, Tom wrote:
>> >> > I have noticed the following while running rados bench seq read
>> >> > tests with a 40M object size
>> >> >
>> >> >     single rados bench, 4 concurrent ops:              bandwidth = 190 MB/s
>> >> >     4 copies of rados bench, 1 concurrent op each:     aggregate bandwidth = 310 MB/s
>> >> >
>> >> > and in fact the single rados bench seems limited to 190, no matter
>> >> > how many concurrent ops.
>> >> >
>> >> > I don't see this kind of behavior with a 4M object size.
>> >> >
>> >> > (The above are with caches dropped on the osd targets)
>> >> >
>> >> > It doesn't seem to be related to the total number of bytes being
>> >> > processed by the single instance, because if I don't drop the caches,
>> >> > both the single rados bench and the 4-copy rados bench get much higher
>> >> > numbers (600 vs. 900), but still the single rados bench appears
>> >> > limited, no matter how many concurrent ops are used.
>> >> >
>> >> > Is there some kind of throttling going on by design here?
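
A minimal sketch of the kind of runs being compared above, for anyone who
wants to reproduce this; the pool name, run length, and exact block size
below are assumptions, not taken from this thread:

    # populate the pool with 40M objects and keep them for the read tests
    rados -p testpool bench 60 write -b 41943040 -t 4 --no-cleanup

    # drop caches on the OSD hosts, then: one instance, 4 concurrent ops
    rados -p testpool bench 60 seq -t 4

    # versus four instances with 1 concurrent op each, started in parallel
    rados -p testpool bench 60 seq -t 1 &   # repeated from four shells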
>> >>
>> >> It might be the librados throttles:
>> >>
>> >> OPTION(objecter_inflight_op_bytes, OPT_U64, 1024*1024*100) // max in-flight data (both directions)
>> >> OPTION(objecter_inflight_ops, OPT_U64, 1024)               // max in-flight ios
>> >>
>> >> IIRC these only affect librados... which would include 'rados bench'.
>> >>
>> >> sage
>> >>
>> >
>> > I noticed those, and tried setting them higher but it didn't seem to
>> > have an effect.
>> > But maybe I didn't change it correctly.
>> >
>> > What I did:
>> >    * stop all osds
>> >    * add the following lines to ceph.conf
>> >      [osd]
>> >         objecter_inflight_op_bytes = 1048576000
>>
>> This is a client-side configuration, so putting it in the OSD section won't
>> have any effect.
>>
>> Do note that even without this, having 1x4 requests outstanding is not quite
>> the same as having 4x1 requests outstanding — a single client only gets one
>> thread to communicate with each OSD, so the cluster has a bit less
>> parallelism when outstanding requests map to the same OSD.
>> But I don't think that should be too obvious at this point.
>> -Greg
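
One way to sanity-check which values a librados client will actually pick
up, assuming the ceph CLI is installed on the client host (it parses the
same ceph.conf that a client process would):

    # print the effective client-side config and filter for the two throttles
    ceph --show-config | grep objecter_inflight
    # with the defaults quoted above, this should show roughly:
    #   objecter_inflight_op_bytes = 104857600
    #   objecter_inflight_ops = 1024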
>>
>
> OK, on the client side, I just put it in the [global] section.
> That seemed to do the trick.
>
> (I assume nothing else needs to be done for the client side).
>
> What was the reason for this limit?
> Should I expect any undesirable side effects from changing this value?

The limit is there to prevent undue memory usage by the Ceph clients;
it's just a feedback throttle for anybody putting in data. If you've
got the memory to allow increased usage, there's no downside.
-Greg
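
For completeness, the client-side change described above would look roughly
like this in ceph.conf on the host running rados bench (the op_bytes value
is the one from this thread; the commented-out ops value is only an
illustration):

    [global]
        # raise the librados/Objecter throttle on in-flight data (default 100 MB)
        objecter_inflight_op_bytes = 1048576000
        # the companion throttle on in-flight op count (default 1024) can be
        # raised the same way if it turns out to be the limit:
        # objecter_inflight_ops = 10240

A [client] section would work just as well as [global] here, since rados
bench runs as a librados client.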



