RE: rados bench single instance vs. multiple instances

> -----Original Message-----
> From: Mark Nelson [mailto:mnelson@xxxxxxxxxx]
> Sent: Wednesday, May 13, 2015 11:17 AM
> To: Deneau, Tom; Sage Weil
> Cc: ceph-devel
> Subject: Re: rados bench single instance vs. multiple instances
> 
> On 05/13/2015 10:05 AM, Deneau, Tom wrote:
> >
> >
> >> -----Original Message-----
> >> From: Sage Weil [mailto:sage@xxxxxxxxxxxx]
> >> Sent: Monday, May 11, 2015 12:04 PM
> >> To: Deneau, Tom
> >> Cc: ceph-devel
> >> Subject: Re: rados bench single instance vs. multiple instances
> >>
> >> On Mon, 11 May 2015, Deneau, Tom wrote:
> >>> I have noticed the following while running rados bench seq read
> >>> tests with a 40M object size
> >>>
> >>>      single rados bench, 4 concurrent ops,           bandwidth = 190 MB/s
> >>>      4 copies of rados bench, 1 concurrent op each,  aggregate bandwidth = 310 MB/s
> >>>
> >>> and in fact the single rados bench seems limited to 190, no matter
> >>> how many concurrent ops.
> >>>
> >>> I don't see this kind of behavior with a 4M object size.
> >>>
> >>> (The above are with caches dropped on the osd targets)
> >>>
> >>> It doesn't seem to be related to the total number of bytes being
> >>> processed by the single instance, because if I don't drop the caches,
> >>> both the single rados bench and the 4-copy rados bench get much
> >>> higher numbers (600 vs. 900) but still the single rados bench appears
> >>> limited, no matter how many concurrent ops are used.
> >>>
> >>> Is there kind of throttling going on by design here?
> >>
> >> It might be the librados throttles:
> >>
> >> OPTION(objecter_inflight_op_bytes, OPT_U64, 1024*1024*100) // max in-flight data (both directions)
> >> OPTION(objecter_inflight_ops, OPT_U64, 1024)               // max in-flight ios
> >>
> >> IIRC these only affect librados.. which would include 'rados bench'.
> >>
> >> sage
> >>
> >
> > Just a follow-up that changing the limits of the two options mentioned
> > above did indeed solve my problem.
> 
> Yay!  I suspect max bytes more so than max ops?
> 
> 
> Mark
> 

Yes, we were really only hitting the limit on in-flight bytes (rather easy
to do, since the object size I was testing was 40M).
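
For anyone else who hits this, the back-of-envelope arithmetic with the
defaults Sage quoted makes the ceiling clear:

    objecter_inflight_op_bytes = 100 MB (default)
    object size                = 40 MB
    => at most floor(100 / 40) = 2 full-object ops in flight per
       librados client, no matter what -t is set to

Each separate rados bench process gets its own librados instance and thus
its own 100 MB budget, which is why 4 copies at 1 concurrent op each could
beat 1 copy at 4.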

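For reference, the change amounted to something like the following in the
ceph.conf seen by the rados bench client (values here are illustrative;
size them as roughly object_size * desired_queue_depth):

    [client]
        objecter_inflight_op_bytes = 1073741824   # 1 GB, up from the 100 MB default
        objecter_inflight_ops = 4096
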
Another question: 

I had noticed previously that, since rados bench writes each object as a single
request covering the entire object length, it is limited by osd_max_write_size
to objects of 90MB or less.

If I wanted to experiment with larger object sizes, should I expect any negative
side effects from increasing osd_max_write_size?
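
Concretely, I would be trying something like the following on the OSDs
(as far as I can tell the option is expressed in MB, with 90 as the
default):

    [osd]
        osd_max_write_size = 256    # MB; default 90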

-- Tom

> >
> > Also, my naïve understanding of the architecture was that things like
> > RBD and RGW were layered on librados as shown in
> > http://ceph.com/docs/master/architecture/.  So wouldn't these
> > throttles apply to those stacks as well?
> >
> > -- Tom Deneau
> >
> >