Re: Block storage performance test tool - would like to merge into cbt

Probably depends on how much time it will take. I'd say if we think it might be more than 15 minutes of discussion, we should wait until the end of the performance meeting and then talk about it. If it's fairly quick, though, we could probably add it to the perf meeting itself.

Mark

On 07/14/2015 03:11 AM, Konstantin Danilov wrote:
Mark,

is the Wednesday performance meeting a good place for this discussion, or
do we need a separate one?

On Mon, Jul 13, 2015 at 6:16 PM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
Hi Konstantin,

I'm definitely interested in looking at your tools and seeing if we can
merge them into cbt!  One of the things we lack right now in cbt is any kind
of real openstack integration.  Right now CBT basically just assumes you've
already launched VMs and specified them as clients in the yaml, so being
able to spin up VMs in a standard way would be very useful.  It might be
worth exploring whether we can use your tool to make the cluster base class
"openstack aware" so that any of the eventual cluster classes (ceph, and
maybe some day gluster, swift, etc) can use it to launch VMs or do other
things.  I'd really love to be able to create a cbt yaml config file and
iterate through parametric configuration parameters building multiple
different clusters and running tests against them with system monitoring and
data post processing happening automatically.
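
To make that concrete, today's model in the yaml looks roughly like this
(a minimal sketch along the lines of cbt's example configs; hostnames and
values here are made up):

cluster:
  user: 'ceph'
  head: 'head.example.com'
  clients: ['client01.example.com', 'client02.example.com']  # assumed to already exist
  osds: ['osd01.example.com']
  iterations: 1
benchmarks:
  radosbench:
    op_size: [4096, 4194304]   # list-valued benchmark params already get swept
    time: 300
    concurrent_ops: [128]

The missing piece is the step before this: creating those client VMs (and
eventually whole clusters) from the same config rather than by hand.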

The data post processing is also something that will be very useful.  We
have a couple of folks really interested in this area as well.

Mark




On 07/11/2015 03:02 AM, Konstantin Danilov wrote:

Hi all,

We (the Mirantis Ceph team) have a tool for block storage performance testing,
called 'wally' -
https://github.com/Mirantis/disk_perf_test_tool.

It has some nice features, like:

* OpenStack and Fuel integration (it can spawn VMs for tests, gather HW
info, etc.)
* A set of tests, combined into a suite, that measures different
performance aspects and produces a single joint report; for example -
http://koder-ua.github.io/6.1GA/cinder_volume_iscsi.html,
and an example report for a VM running on Ceph-backed drives -
http://koder-ua.github.io/random/ceph_example.html
* Data postprocessing - confidence intervals, etc. (rough sketch below)
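
To give a rough idea of the postprocessing, conceptually it does something
like this (a simplified illustration, not the actual wally code; the
numbers are made up):

# 95% confidence interval for mean IOPS over repeated runs,
# using a normal approximation for simplicity.
import math

def mean_confidence_interval(samples, z=1.96):
    n = len(samples)
    mean = sum(samples) / float(n)
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return mean, mean - half, mean + half

iops_runs = [11250, 10980, 11410, 11120, 11300]  # made-up fio results
mean, low, high = mean_confidence_interval(iops_runs)
print("IOPS: %.0f (95%% CI: %.0f - %.0f)" % (mean, low, high))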

We would like to merge our code into cbt. Are you interested in it?
Can we discuss a way to merge?

Thanks







