Re: Get rbd performance stats

Yeah, I don’t have access to the hypervisors, nor to the VMs on those hypervisors. Having some sort of ceph-top would be awesome; I wish they would implement that.

Thanks,
Matthew Stroud

On 9/29/17, 11:49 AM, "Jason Dillaman" <jdillama@xxxxxxxxxx> wrote:

    There is a feature in the backlog for an "rbd top"-like utility that
    could provide a probabilistic view of the top X% of RBD image stats
    against the cluster. The data collection would be performed by each
    OSD individually, which is why the stats would be probabilistic
    rather than absolute. It would also only show IOPS, throughput,
    latency, etc. from the point of view of the individual OSDs
    aggregated together, rather than from the QEMU/librbd client. [1]

    Alternatively, if you could install performance metric collectors on
    the hypervisor hosts, there are several existing examples for pcp,
    collectd, diamond, etc. that connect to the Ceph admin socket and
    directly request performance metrics. Since librbd can also expose
    an admin socket and its metrics, such a set of collectors could show
    the client-side performance stats.
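
    For example, here is a minimal sketch of polling such an admin socket
    from Python. It assumes the librbd admin socket has been enabled
    (e.g. via an "admin socket" setting in the [client] section of
    ceph.conf), and the socket path below is hypothetical -- adjust it to
    the *.asok file your client actually creates:

        #!/usr/bin/env python
        # Minimal sketch: dump the perf counters exposed on a Ceph admin
        # socket. Requires the "ceph" CLI on the same host.
        import json
        import subprocess

        # Hypothetical path -- substitute the *.asok file found under
        # /var/run/ceph/ on your hypervisor.
        SOCKET = "/var/run/ceph/ceph-client.admin.12345.asok"

        def perf_dump(sock):
            # "ceph --admin-daemon <sock> perf dump" returns every perf
            # counter registered on that socket as a JSON object.
            out = subprocess.check_output(
                ["ceph", "--admin-daemon", sock, "perf", "dump"])
            return json.loads(out.decode("utf-8"))

        counters = perf_dump(SOCKET)
        # librbd registers its counters under a per-image section; print
        # the section names so you can pick out the image of interest.
        for section in sorted(counters):
            print(section)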

    [1] http://pad.ceph.com/p/ceph-top

    On Fri, Sep 29, 2017 at 1:37 PM, Matthew Stroud
    <mattstroud@xxxxxxxxxxxxx> wrote:
    > Yeah, that is the core problem. I have been working with the teams that
    > manage those. However, it appears there isn't a way I can check this from
    > my side.
    >
    >
    >
    > From: David Turner <drakonstein@xxxxxxxxx>
    > Date: Friday, September 29, 2017 at 11:08 AM
    > To: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>, Matthew Stroud
    > <mattstroud@xxxxxxxxxxxxx>
    > Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
    > Subject: Re:  Get rbd performance stats
    >
    >
    >
    > His dilemma sounded like he has access to the cluster, but not to any of
    > the clients where the RBDs are used, or even to the hypervisors in charge
    > of those.
    >
    >
    >
    > On Fri, Sep 29, 2017 at 12:03 PM Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:
    >
    > On 2017-09-29 17:13, Matthew Stroud wrote:
    >
    > Is there a way I could get performance stats for rbd images? I'm looking
    > for IOPS and throughput.
    >
    >
    >
    > The issue we are dealing with is that there was a sudden jump in
    > throughput, and I want to be able to find out which rbd volume might be
    > causing it. I just manage the ceph cluster, not the OpenStack hypervisors.
    > I'm hoping I can figure out the offending volume with the tool set I have.
    >
    >
    >
    > Thanks,
    >
    > Matthew Stroud
    >
    >
    >
    > If you use a kernel-mapped rbd image, you should be able to get I/O stats
    > from most stats tools; it will show up as a regular block device.
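    >
    > For example, here is a minimal sketch that computes per-device IOPS and
    > throughput for kernel-mapped images by sampling /proc/diskstats twice
    > (it assumes the standard kernel diskstats field layout and that mapped
    > images show up as /dev/rbd*):
    >
    >     #!/usr/bin/env python
    >     # Minimal sketch: IOPS and throughput for rbd block devices,
    >     # derived from two samples of /proc/diskstats.
    >     import time
    >
    >     def rbd_stats():
    >         stats = {}
    >         with open("/proc/diskstats") as f:
    >             for line in f:
    >                 fields = line.split()
    >                 name = fields[2]
    >                 if name.startswith("rbd"):
    >                     # fields: 3=reads completed, 5=sectors read,
    >                     #         7=writes completed, 9=sectors written
    >                     stats[name] = (int(fields[3]), int(fields[5]),
    >                                    int(fields[7]), int(fields[9]))
    >         return stats
    >
    >     INTERVAL = 5.0
    >     before = rbd_stats()
    >     time.sleep(INTERVAL)
    >     after = rbd_stats()
    >     for dev in sorted(before):
    >         if dev not in after:
    >             continue
    >         b, a = before[dev], after[dev]
    >         iops = ((a[0] - b[0]) + (a[2] - b[2])) / INTERVAL
    >         # sector counts are in 512-byte units
    >         mbs = ((a[1] - b[1]) + (a[3] - b[3])) * 512 / INTERVAL / 1e6
    >         print("%s: %.1f IOPS, %.2f MB/s" % (dev, iops, mbs))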
    >
    >
    >



    --
    Jason




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



