Re: rbd IO monitoring

The "osd_perf_query" mgr module is just a demo / testing framework.
However, the output was tweaked prior to merge to provide more
readable values instead of the raw "{value summation} / {count}" pairs
from the original submission.
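
To answer the question quoted below: each value in the PR's original
example is a raw "{value summation} / {count}" pair, so dividing gives
the average per op. Assuming the latency sum is in nanoseconds, the
quoted example works out to roughly:

    write_bytes:   409600 / 107     -> ~3829 bytes per write op
    write_latency: 2618503617 / 107 -> ~24.5 ms average write latency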
On Tue, Dec 4, 2018 at 1:56 PM Michael Green <green@xxxxxxxxxxxxx> wrote:
>
> Interesting, thanks for sharing.
>
> I'm looking at the example output in PR 25114:
>
> write_bytes
>     409600/107
>     409600/107
>
> write_latency
>     2618503617/107
>
> How should these values be interpreted?
> --
> Michael Green
>
> > On Dec 3, 2018, at 2:47 AM, Jan Fajerski <jfajerski@xxxxxxxx> wrote:
> >
> >>  Question: what tools are available to monitor IO stats on RBD level?
> >>  That is, IOPS, Throughput, IOs inflight and so on?
> > There is some brand new code for RBD IO monitoring. This PR
> > (https://github.com/ceph/ceph/pull/25114) added rbd client-side perf
> > counters, and this PR (https://github.com/ceph/ceph/pull/25358) will
> > add those counters as prometheus metrics. There is also room for an
> > "rbd top" tool, though I haven't seen any code for this.
> > I'm sure Mykola (the author of both PRs) could go into more detail
> > if needed. I expect this functionality to land in nautilus.
> >
>

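If you want to experiment with the module in the meantime, the rough
workflow (command names as of PR 25114; they may still change before
the nautilus release) looks like:

    ceph mgr module enable osd_perf_query
    ceph osd perf query add <query>         # register a query
    ceph osd perf counters get <query_id>   # dump accumulated counters
    ceph osd perf query remove <query_id>   # deregister when done

PR 25358 would then expose the same counters through the mgr
prometheus module.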


-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


