Re: Ceph benchmark tool (cbt)

This is what the "use_existing" flag is for (on by default).  It short-circuits initialize(), which is what actually does the whole shutdown/creation/startup procedure.


https://github.com/ceph/cbt/blob/master/cluster/ceph.py#L149-L151


That is invoked before shutdown() and make_osds():


https://github.com/ceph/cbt/blob/master/cluster/ceph.py#L159

https://github.com/ceph/cbt/blob/master/cluster/ceph.py#L181
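
For anyone wanting to try this against an existing cluster, a minimal sketch of what the YAML might look like is below.  The hostnames, pool profile, and benchmark values are placeholders assumed for illustration (loosely modeled on the sample configs in the cbt repo), not something to copy verbatim:

   # Rough sketch only: hosts, pool profile, and benchmark values are
   # assumed placeholders, not from a real deployment.
   cluster:
     user: "ceph"
     head: "mon1"                  # node cbt drives admin commands from
     clients: ["client1"]          # nodes that generate the benchmark load
     osds: ["osd1", "osd2"]
     mons: ["mon1"]
     use_existing: True            # skip initialize(): no teardown/rebuild
     clusterid: "ceph"
     iterations: 1
     pool_profiles:
       replicated:
         pg_size: 1024
         pgp_size: 1024
         replication: 'replicated'
   benchmarks:
     radosbench:
       time: 300
       concurrent_ops: [128]
       op_size: [4194304]
       pool_profile: replicated

With use_existing left on, the shutdown/creation path above is never entered; cbt mostly just creates the benchmark pools and runs the workloads against the running cluster.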


This is how a lot of folks are using cbt now, either with vstart or to test existing clusters deployed in the field.  When use_existing is disabled, cbt can build temporary multi-node bare metal test clusters for you by directly invoking the daemons via pdsh (it actually pre-dates ceph-deploy/ceph-ansible/etc).  The entire OSD structure is created in /tmp to hopefully convince people that this is only intended to be used for testing.  The code you are seeing that tears down old clusters and creates new ones is for that path (see the sketch below).  The advantage of this is that cbt has been very resilient to some of the churn over the years as different installer methods have come and gone.  The downside is that sometimes there are changes that require updates (for instance, cbt-created clusters still use msgr v1, which hopefully will change soon to support deploying crimson OSDs that require v2).
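
For contrast, a rough sketch of that cluster-building path (again with made-up hosts and paths, based loosely on the sample configs shipped with cbt) looks something like:

   # Sketch of the cluster-building path; all values here are illustrative.
   cluster:
     user: "ceph"
     head: "node1"
     clients: ["node1"]
     osds: ["node1", "node2"]
     mons:
       node1:
         a: "192.168.0.1:6789"
     use_existing: False           # cbt tears down and builds a throwaway cluster
     osds_per_node: 2
     fs: 'xfs'
     mkfs_opts: '-f -i size=2048'
     mount_opts: '-o inode64,noatime,logbsize=256k'
     conf_file: '/etc/ceph/cbt.ceph.conf'   # hypothetical path to a ceph.conf for the test cluster
     tmp_dir: '/tmp/cbt'           # OSD data lives under /tmp, as mentioned above

In both cases the invocation is the same, roughly ./cbt.py --archive=<results dir> <config>.yaml.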


Mark


On 12/11/20 7:18 AM, Marc Roos wrote:
Just run the tool from a client that is not part of the Ceph nodes. Then
it can do nothing that you did not configure Ceph to allow it to do ;)
Besides, you should never run software from 'unknown' sources in an
environment where it can use 'admin' rights.

-----Original Message-----
To: ceph-users
Subject:  Ceph benchmark tool (cbt)

Hi all,

I want to benchmark my production cluster with cbt. I read a bit of the
code and I see something strange in it; for example, it creates the
ceph-osd daemons by itself (
https://github.com/ceph/cbt/blob/master/cluster/ceph.py#L373) and also
shuts down the whole cluster (
https://github.com/ceph/cbt/blob/master/cluster/ceph.py#L212).

Is there any configuration to avoid doing harmful things to the cluster
and, for example, just test read_ahead_kb or simply stop some OSDs and
other things that can be reverted, without taking the cluster fully down?

Thanks.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



