Re: Explanation of perf dump of rbd

On Fri, Feb 1, 2019 at 2:31 AM Sinan Polat <sinan@xxxxxxxx> wrote:
>
> Thanks for the clarification!
>
> Great that the next release will include the feature. We are running on Red Hat Ceph, so we might have to wait longer before having the feature available.
>
> Another related (simple) question:
> We are using
> /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
> in ceph.conf. Can we include the volume name in the path?

Unfortunately there are no metavariables that translate to the pool
and/or image names. The pool and image names are available within the
perf metrics dump parent object, but unless you plan to scrape all of
the sockets periodically, you would need to check each RBD asok file
to find the one for the correct image.
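
For illustration, here is a rough (untested) Python sketch of that
scraping approach. It assumes the asok files live under /var/run/ceph
and that the librbd section key in the perf dump embeds the pool and
image names (e.g. "librbd-<id>-<pool>-<image>"); verify both against
your own "perf dump" output before relying on it:

#!/usr/bin/env python
# Sketch: locate the asok that belongs to a given RBD image by checking
# each socket's perf dump for a librbd section mentioning the image.
import glob
import json
import subprocess

def find_asok(pool, image, asok_dir='/var/run/ceph'):
    needle = '-%s-%s' % (pool, image)
    for asok in glob.glob(asok_dir + '/*.asok'):
        try:
            out = subprocess.check_output(
                ['ceph', '--admin-daemon', asok, 'perf', 'dump'])
        except (subprocess.CalledProcessError, OSError):
            continue  # socket may be stale or not answer perf dump
        for section in json.loads(out):
            if section.startswith('librbd') and needle in section:
                return asok, section
    return None

# hypothetical pool/image names, for illustration only
print(find_asok('rbd', 'myimage'))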

> Sinan
>
> > On 1 Feb 2019, at 00:44, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
> >
> >> On Thu, Jan 31, 2019 at 12:16 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> >>
> >> "perf schema" has a description field that may or may not contain
> >> additional information.
> >>
> >> My best guess for these fields would be bytes read/written since
> >> startup of this particular librbd instance (based on how these
> >> counters usually work).
> >
> > Correct -- they should be strictly increasing while the image is
> > in use. If you periodically scrape the values (along with the
> > current timestamp), you can compute the rates between the current
> > and previous samples.
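> >
> > For example (a minimal Python sketch; the first pair of values is
> > taken from the dump below, the second scrape is made up):
> >
> > # rate from two consecutive scrapes of the same counter
> > prev_bytes, prev_time = 28242432, 0.0    # first scrape
> > curr_bytes, curr_time = 28500000, 60.0   # hypothetical scrape 60 s later
> > rate = (curr_bytes - prev_bytes) / (curr_time - prev_time)
> > print("%.1f bytes/sec read" % rate)      # ~4292.8 bytes/sec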
> >
> > On a semi-related subject: the forthcoming Nautilus release will
> > include new "rbd perf image iotop" and "rbd perf image iostat"
> > commands to monitor metrics by RBD image.
> >
> >> Paul
> >>
> >> --
> >> Paul Emmerich
> >>
> >> Looking for help with your Ceph cluster? Contact us at https://croit.io
> >>
> >> croit GmbH
> >> Freseniusstr. 31h
> >> 81247 München
> >> www.croit.io
> >> Tel: +49 89 1896585 90
> >>
> >>> On Thu, Jan 31, 2019 at 3:41 PM Sinan Polat <sinan@xxxxxxxx> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I finally figured out how to measure the statistics of a specific RBD volume:
> >>>
> >>> $ ceph --admin-daemon <path to .asok> perf dump
> >>>
> >>>
> >>> It outputs a lot, but I don't know what it means. Is there any documentation about the output?
> >>>
> >>> For now the most important values are:
> >>>
> >>> - bytes read
> >>>
> >>> - bytes written
> >>>
> >>>
> >>> I think I need to look at this:
> >>>
> >>> {
> >>>     "rd": 1043,
> >>>     "rd_bytes": 28242432,
> >>>     "rd_latency": {
> >>>         "avgcount": 1768,
> >>>         "sum": 2.375461133,
> >>>         "avgtime": 0.001343586
> >>>     },
> >>>     "wr": 76,
> >>>     "wr_bytes": 247808,
> >>>     "wr_latency": {
> >>>         "avgcount": 76,
> >>>         "sum": 0.970222300,
> >>>         "avgtime": 0.012766082
> >>>     }
> >>> }
> >>>
> >>>
> >>> But what are 28242432 (rd_bytes) and 247808 (wr_bytes)? Is that 28242432 bytes read and 247808 bytes written during the last minute/hour/day? Or is it since mounted, or...?
> >>>
> >>>
> >>> Thanks!
> >>>
> >>>
> >>> Sinan
> >>>
> >
> >
> >
> > --
> > Jason
>


-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



