Re: Get rbd performance stats

There is no tool on the Ceph side to see which RBDs are doing what. Generally you need to monitor the mount points for the RBDs with iostat or a similar tool to track that down.

That said, there are some tricky things you could probably do to track down an RBD that is doing a lot of writing (as long as it isn't just read iops). If you snapshot an RBD, you can see how much data on the RBD has changed since the time you took the snapshot, so by watching the snapshot sizes you can tell which RBDs are writing the most data. Depending on your version of Ceph, snapshot clean-up may be handled fairly well or very inefficiently; double-check which rbd snapshot settings are available in your cluster version so you don't accidentally create an even worse slow-down than the one you were trying to track down. How risky this is also depends on how large your cluster is, how many RBDs you have, and how many snapshots you create/delete at the same time. OTOH, your openstack deployment might already be taking regular snapshots, in which case you can just look at the snapshots that already exist and track it there.
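For what it's worth, a rough sketch of that snapshot approach with the python-rados/python-rbd bindings might look like the below. The pool name and snapshot name are placeholders, and the snapshot clean-up caveats above still apply; treat it as an illustration, not a drop-in tool.

#!/usr/bin/env python
# Rough sketch of the snapshot-diff idea using the python-rados/python-rbd
# bindings. POOL and SNAP are placeholder names -- adjust for your cluster,
# and keep the snapshot clean-up caveats above in mind.
import rados
import rbd

POOL = 'rbd'            # placeholder pool name
SNAP = 'perf-baseline'  # placeholder snapshot name

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(POOL)
try:
    for name in rbd.RBD().list(ioctx):
        image = rbd.Image(ioctx, name)
        try:
            try:
                # First pass: take a baseline snapshot, then come back later.
                image.create_snap(SNAP)
                print('%s: baseline snapshot created' % name)
            except rbd.ImageExists:
                # Later pass: total up the bytes written since the baseline.
                changed = [0]
                def cb(offset, length, exists):
                    changed[0] += length
                image.diff_iterate(0, image.size(), SNAP, cb)
                print('%s: %d bytes changed since snapshot' % (name, changed[0]))
                image.remove_snap(SNAP)
        finally:
            image.close()
finally:
    ioctx.close()
    cluster.shutdown()

Run it once to lay down the baseline snapshots, then run it again after the throughput spike and sort the output by bytes changed.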

Another, even messier, option would be to take a full rados ls of the pool and group the objects by their RBD data prefix (the block_name_prefix that rbd info reports). Then later compare another rados ls against the first and see how much changed for each RBD in between.
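In the same hedged spirit, a minimal sketch of that listing approach could group the pool's objects by their rbd_data prefix and sum their sizes, so two passes taken some time apart can be diffed per image. Again, the pool name is a placeholder, and stat-ing every object in a large pool is expensive, which is exactly why this is the messier option.

# Minimal sketch of the rados-listing idea: group the pool's objects by
# their rbd_data.<image id> prefix (v2 image format assumed) and sum their
# sizes. Run it twice, some time apart, and diff the output per prefix.
# POOL is a placeholder; stat-ing every object in a big pool is slow.
from collections import defaultdict

import rados

POOL = 'rbd'  # placeholder pool name

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(POOL)
try:
    per_image = defaultdict(lambda: [0, 0])  # prefix -> [objects, bytes]
    for obj in ioctx.list_objects():
        name = obj.key
        if not name.startswith('rbd_data.'):
            continue  # skip headers, the rbd_directory, etc.
        prefix = name.rsplit('.', 1)[0]  # 'rbd_data.<image id>'
        size, _mtime = ioctx.stat(name)
        per_image[prefix][0] += 1
        per_image[prefix][1] += size
    for prefix, (count, nbytes) in sorted(per_image.items()):
        # 'rbd info <image>' prints block_name_prefix, which maps these
        # prefixes back to image names.
        print('%s: %d objects, %d bytes' % (prefix, count, nbytes))
finally:
    ioctx.close()
    cluster.shutdown()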

On Fri, Sep 29, 2017 at 11:13 AM Matthew Stroud <mattstroud@xxxxxxxxxxxxx> wrote:

Is there a way I could get performance stats for rbd images? I’m looking for iops and throughput.

 

The issue we are dealing with is that there was a sudden jump in throughput, and I want to be able to find out which rbd volume might be causing it. I just manage the ceph cluster, not the openstack hypervisors. I’m hoping I can figure out the offending volume with the tool set I have.

 

Thanks,

Matthew Stroud




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
