Re: What do you use to benchmark your rgw?

On 03/28/2018 11:11 AM, Mark Nelson wrote:
> Personally I usually use a modified version of Mark Seger's getput
> tool here:
>
> https://github.com/markhpc/getput/tree/wip-fix-timing
>
> The difference between this version and upstream is primarily to make
> getput more accurate/useful when using something like CBT for
> orchestration instead of the included orchestration wrapper (gpsuite).
>
> CBT can use this version of getput and run relatively accurate
> multi-client tests without requiring quite as much setup as cosbench.
> Having said that, many folks have used cosbench effectively, and I
> suspect it might be a good option for many people. I'm not sure how
> much development is happening these days; I think the primary author
> may no longer be working on the project.
>

AFAIK the project is still alive. Adding Mark.

Mohamad
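[As an aside on what a tool like getput measures: each client times its own PUT (or GET) operations and reports per-client throughput, which the orchestrator then aggregates across clients. A minimal, hypothetical sketch of that per-client timing loop — the `put_object` callable is an illustrative stand-in for a real S3 client call, not getput's actual API:]

```python
import time

def run_puts(put_object, payload, count):
    """Time `count` PUT operations and return (elapsed_s, MB/s).

    `put_object(name, data)` is any callable that uploads `data`;
    in a real run it would wrap an S3 client or s3cmd invocation.
    """
    start = time.perf_counter()
    for i in range(count):
        put_object(f"obj-{i}", payload)
    # Guard against a zero reading on trivially fast (dummy) runs.
    elapsed = max(time.perf_counter() - start, 1e-9)
    mbps = (len(payload) * count) / elapsed / 1e6
    return elapsed, mbps
```

[In a multi-client run the per-client rates are summed, so consistent timing across clients matters; skewed start times between clients would be exactly the kind of accuracy problem an orchestration-focused fork like wip-fix-timing presumably addresses.]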


> Mark
>
> On 03/28/2018 09:21 AM, David Byte wrote:
>> I use cosbench (the last RC works well enough). I can get multiple
>> GB/s from my 6-node cluster with 2 RGWs.
>>
>> David Byte
>> Sr. Technical Strategist
>> IHV Alliances and Embedded
>> SUSE
>>
>> Sent from my iPhone. Typos are Apple's fault.
>>
>> On Mar 28, 2018, at 5:26 AM, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
>>
>>> s3cmd and the CLI version of Cyberduck, to test it end-to-end using
>>> parallelism where possible.
>>>
>>> Getting some 100 MB/s at most, from 500 km away over HTTPS, against
>>> 5 radosgw instances behind HAProxy.
>>>
>>>
>>> 2018-03-28 11:17 GMT+02:00 Matthew Vernon <mv3@xxxxxxxxxxxx>:
>>>
>>>     Hi,
>>>
>>>     What are people here using to benchmark their S3 service (i.e.
>>>     the rgw)?
>>>     rados bench is great for some things, but doesn't tell me what
>>>     performance I can get from my rgws.
>>>
>>>     It seems that there used to be rest-bench, but that isn't in Jewel
>>>     AFAICT; I had a bit of a look at cosbench but it looks fiddly to
>>>     set up and a bit under-maintained (the most recent version doesn't
>>>     work out of the box, and the PR to fix that has been languishing
>>>     for a while).
>>>
>>>     This doesn't seem like an unusual thing to want to do, so I'd like
>>>     to know what other ceph folk are using (and, if you like, the
>>>     numbers you get from the benchmarkers)...?
>>>
>>>     Thanks,
>>>
>>>     Matthew
>>>
>>>
>>>     --
>>>      The Wellcome Sanger Institute is operated by Genome Research
>>>      Limited, a charity registered in England with number 1021457 and a
>>>      company registered in England with number 2742969, whose registered
>>>      office is 215 Euston Road, London, NW1 2BE.
>>>     _______________________________________________
>>>     ceph-users mailing list
>>>     ceph-users@xxxxxxxxxxxxxx
>>>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>>>
>>> -- 
>>> May the most significant bit of your life be positive.
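[Janne's point about parallelism is worth underlining for WAN tests: over a 500 km path a single HTTPS stream is largely latency-bound, so aggregate throughput comes from running many transfers concurrently. A hypothetical sketch of such a parallel driver — the `upload` callable is an assumed stand-in for an s3cmd subprocess or SDK call, not any specific tool's API:]

```python
import time
from concurrent.futures import ThreadPoolExecutor

def parallel_put(upload, payloads, workers=8):
    """Upload all payloads with `workers` concurrent streams and
    return aggregate throughput in MB/s.

    `upload(name, data)` is any callable; here it stands in for
    a real S3 upload (SDK call or s3cmd subprocess).
    """
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(upload, f"obj-{i}", data)
                   for i, data in enumerate(payloads)]
        for f in futures:
            f.result()  # re-raise any upload error
    # Guard against a zero reading on trivially fast (dummy) runs.
    elapsed = max(time.perf_counter() - start, 1e-9)
    return sum(len(d) for d in payloads) / elapsed / 1e6
```

[Sweeping `workers` upward until throughput flattens gives a rough idea of where the gateways, or the WAN link itself, saturate.]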



