Re: CBT: New RGW getput benchmark and testing diary

Just based on what I saw during these tests, it looks to me like a lot more time was spent dealing with civetweb's threads than in RGW itself. I didn't look too closely, but it may be worth checking whether there's any low-hanging fruit in civetweb itself.

Mark

On 02/06/2017 09:33 AM, Matt Benjamin wrote:
Thanks for the detailed effort and analysis, Mark.

As we get closer to the L time-frame, it should become relevant to look at the i/o paths in the boost::asio frontend rework, which is the ongoing effort to reduce CPU overhead and revise the threading model in general.

Matt

----- Original Message -----
From: "Mark Nelson" <mnelson@xxxxxxxxxx>
To: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, cbt@xxxxxxxxxxxxxx
Cc: "Mark Seger" <mjseger@xxxxxxxxx>, "Kyle Bader" <kbader@xxxxxxxxxx>, "Karan Singh" <karan@xxxxxxxxxx>, "Brent
Compton" <bcompton@xxxxxxxxxx>
Sent: Monday, February 6, 2017 12:55:20 AM
Subject: CBT: New RGW getput benchmark and testing diary

Hi All,

Over the weekend I took a stab at improving our ability to run RGW
performance tests in CBT.  Previously the only way to do this was to use
the cosbench plugin, which requires a fair amount of additional setup
and, while quite powerful, can be overkill in situations where you want
to rapidly iterate over tests looking for specific issues.  A while ago
Mark Seger from HP told me he had created a swift benchmark called
"getput" that is written in python and is much more convenient to run
quickly in an automated fashion.  Normally getput is used in conjunction
with gpsuite, a tool for coordinating benchmark runs across multiple
getput processes.  That is how you would likely use getput on a typical
ceph or swift cluster, but since CBT builds the cluster and has its own
way of launching multiple benchmark processes, it uses getput directly.
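
To give a feel for what "using getput directly" means, here is a
minimal sketch of the pattern: launching several getput worker
processes in parallel, roughly the way a harness like CBT might, rather
than coordinating them through gpsuite.  The exact flags
(-c/-o/-s/--tests/--runtime) and the container/object naming below are
assumptions based on getput's swift-style CLI, not taken from CBT
itself:

    # Sketch: spawn N getput workers in parallel and wait for them,
    # instead of coordinating them through gpsuite.  Flag names are
    # assumed from getput's CLI and may differ in your version.
    import subprocess

    NUM_PROCS = 4  # number of parallel getput workers (assumed tunable)

    procs = []
    for i in range(NUM_PROCS):
        cmd = [
            "getput",
            "-c", "cont%d" % i,   # one container per worker
            "-o", "obj%d" % i,    # object name prefix
            "-s", "4k",           # object size
            "--tests", "p,g,d",   # put, get, delete phases
            "--runtime", "60",    # run each phase for 60 seconds
        ]
        procs.append(subprocess.Popen(cmd))

    # Wait for all workers and report any failures.
    for i, p in enumerate(procs):
        rc = p.wait()
        if rc != 0:
            print("worker %d exited with %d" % (i, rc))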





