Re: RBD image perf counters: usage, access

Hi Yang,

> Do you mean getting the perf counters via an API? First, these counters are only for a particular ImageCtx (a connected client); then you can
> read them with the perf dump command from my last mail, I think.
Yes, I did mean getting counters via an API. And it looks like I can adapt this admin-daemon command for my purposes. Thanks!

Having ceph-top would be just great and much more useful for me, yes. I'm glad there are some discussions about that; I didn't know about them, so thanks for pointing them out :)
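
For anyone adapting that admin-daemon command from a script, here is a minimal sketch. It only shells out to the plain "ceph --admin-daemon <asok> perf dump" command shown further down in this thread; the asok path and the assumption that per-image counter sections are named with a "librbd" prefix are placeholders to adjust for your environment.

#!/usr/bin/env python
# Minimal sketch: read librbd perf counters from one client admin socket.
# The asok path below is hypothetical; the "librbd" section-name prefix is
# an assumption about how the per-image counter sections are named.
import json
import subprocess

def perf_dump(asok_path):
    # Run "ceph --admin-daemon <asok> perf dump" and parse the JSON output.
    out = subprocess.check_output(
        ["ceph", "--admin-daemon", asok_path, "perf", "dump"])
    return json.loads(out.decode("utf-8"))

def librbd_sections(dump):
    # Yield (section_name, counters) for sections that look like librbd images.
    for name, counters in dump.items():
        if name.startswith("librbd"):
            yield name, counters

if __name__ == "__main__":
    asok = "/var/run/ceph/client.admin.9921.asok"  # hypothetical path
    for name, counters in librbd_sections(perf_dump(asok)):
        print(name, counters.get("rd"), counters.get("rd_bytes"))

Each section still describes a single ImageCtx, as Yang points out below, so this only reflects the one connected client behind that socket.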


On 27/03/17 15:38, Dongsheng Yang wrote:

On 03/27/2017 04:06 PM, Masha Atakova wrote:

Hi Yang,

Hi Masha,

Thank you for your reply. It is indeed very useful to know that there can be many ImageCtx objects for one image.

But in my setting, I don't have any particular ceph client connected to ceph (I could, but this is not the point). I'm trying to get metrics for a particular image while not performing anything with it myself.


The perf counters you mentioned in your first mail are just for one particular image client; that means these perf counters will disappear when the client disconnects.

And I'm trying to get access to the performance counters listed in the ImageCtx class; they don't seem to be reported by the perf tool.


Do you mean getting the perf counters via an API? First, these counters are only for a particular ImageCtx (a connected client); then you can read them with the perf dump command from my last mail, I think.


If you want to get the performance counters for an image (no matter how many ImageCtxes, connected or disconnected), maybe you need to wait for this one:
http://pad.ceph.com/p/ceph-top

Yang

Thanks!

On 27/03/17 12:29, Dongsheng Yang wrote:
Hi Masha,
    you can get the counters with the perf dump command on the asok file of your client, like this:
$ ceph --admin-daemon out/client.admin.9921.asok perf dump|grep rd
        "rd": 656754,
        "rd_bytes": 656754,
        "rd_latency": {
        "discard": 0,
        "discard_bytes": 0,
        "discard_latency": {
        "omap_rd": 0,

But note that these are the counters of this one ImageCtx, not the counters for the image as a whole. There are possibly several ImageCtxes reading or writing on the same image.

Yang
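
Following up on that caveat: since each asok only covers one ImageCtx, one rough workaround is to sum a counter across the admin sockets of every client that currently has the image open. A sketch under the same assumptions as above (both the glob pattern and the "librbd" section-name prefix are guesses to adapt):

# Rough sketch: sum one librbd counter across several client admin sockets.
# Both the glob pattern and the "librbd" prefix are assumptions.
import glob
import json
import subprocess

def sum_counter(counter_name, asok_glob="/var/run/ceph/client.*.asok"):
    total = 0
    for asok in glob.glob(asok_glob):
        dump = json.loads(subprocess.check_output(
            ["ceph", "--admin-daemon", asok, "perf", "dump"]).decode("utf-8"))
        for section, counters in dump.items():
            if section.startswith("librbd"):
                total += counters.get(counter_name, 0)
    return total

print("total reads across connected clients:", sum_counter("rd"))

This still misses any ImageCtx that has already disconnected, which is exactly the gap ceph-top is meant to close.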

On 03/27/2017 12:23 PM, Masha Atakova wrote:

Hi everyone,

I was going around trying to figure out how to get ceph metrics at a more detailed level than daemons. Of course, I found and explored the API for watching rados objects, but I'm more interested in getting metrics about RBD images. And while I could get the list of objects for a particular image and then watch all of them, that doesn't seem like a very efficient way to go about it.

I checked the librbd API and there isn't anything that helps with my goal.

So I went through the source code and found the list of performance counters for an image, which are incremented by other parts of ceph when the corresponding operations are performed: https://github.com/ceph/ceph/blob/master/src/librbd/ImageCtx.cc#L364

I have 2 questions about it:

1) Is there any workaround to use those counters right now, maybe by compiling the code that reads them against ceph? It looks like I need to be able to access a particular ImageCtx object (instead of creating my own), and I just can't find an appropriate class / part of librbd that allows me to do so.

2) Are there any plans to make those counters accessible via an API like librbd or librados?

I see that these questions might be more appropriate for the devel list, but:

- it seems to me that the question of getting ceph metrics is more interesting for those who use ceph

- I couldn't subscribe to it; the error is provided below.

Thanks!

majordomo@xxxxxxxxxxxxxxx:
SMTP error from remote server for MAIL FROM command, host: vger.kernel.org (209.132.180.67) reason: 553 5.7.1 Hello [74.208.4.201], for your MAIL FROM address <masha.atakova@xxxxxxxx> policy analysis reported: Your address is not liked source for email


--- The header of the original message follows. ---

Received: from [192.168.1.10] ([223.206.146.181]) by mail.gmx.com (mrgmxus001
 [74.208.5.15]) with ESMTPSA (Nemesis) id 0M92q3-1d0LS03yov-00CTwW for
 <majordomo@xxxxxxxxxxxxxxx>; Mon, 27 Mar 2017 05:55:46 +0200
To: majordomo@xxxxxxxxxxxxxxx
From: Masha Atakova <masha.atakova@xxxxxxxx>
Subject: subscribe ceph-devel
Message-ID: <174d9bc0-b50d-fc80-ede8-5ba9d472e56a@xxxxxxxx>
Date: Mon, 27 Mar 2017 10:55:43 +0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101
 Thunderbird/45.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
