Re: "ceph fs perf stats" and "cephfs-top" don't work

On Thu, Jul 15, 2021 at 5:18 PM Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> I just set up a virtual one-node cluster (16.2.5) to check out
> cephfs-top. Regarding the number of clients, I was a little surprised
> too: in the first couple of minutes the number switched back and forth
> between 0 and 1, although I had not connected any client yet. After a
> while the number became stable and correct. I have two clients
> connected now, but I don't see any stats despite having the stats
> module enabled:

The "(dis)appearing" client is the libcephfs instance in mgr/volumes.
When mgr/volumes cleans up its connection, you would see the client
count drop (and increase when it starts instantiating connections).
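
If you want to confirm, you can list the sessions on the active MDS.
A quick sketch, reusing the <rank0 id> placeholder from below:

  # ceph tell mds.<rank0 id> session ls

The mgr/volumes connection shows up as a libcephfs session whose
client_metadata points at the mgr host.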

>
> ---snip---
> cephfs-top - Thu Jul 15 13:35:41 2021
> Client(s): 2 - 0 FUSE, 0 kclient, 2 libcephfs
>
>    client_id  mount_root  chit(%)  rlat(s)  wlat(s)  mlat(s)  dlease(%)  ofiles  oicaps  oinodes  mount_point@host/addr
>    24835      /           N/A      N/A      N/A      N/A      N/A        N/A     N/A     N/A      N/A@pacific/v1:192.168.124.35
>    24846      /client     N/A      N/A      N/A      N/A      N/A        N/A     N/A     N/A      N/A@host-192-168-124-168/v1:192.168.124.168
> ---snip---
>
>
> The command 'ceph fs perf stats' also only shows this:
>
> ---snip---
> pacific:~ # ceph fs perf stats
> {"version": 1, "global_counters": ["cap_hit", "read_latency",
> "write_latency", "metadata_latency", "dentry_lease", "opened_files",
> "pinned_icaps", "opened_inodes"], "counters": [], "client_metadata":
> {"client.24835": {"IP": "v1:192.168.124.35", "hostname": "pacific",
> "root": "/", "mount_point": "N/A"}, "client.24846": {"IP":
> "v1:192.168.124.168", "hostname": "host-192-168-124-168", "root":
> "/client", "mount_point": "N/A"}}, "global_metrics": {"client.24835":
> [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]],
> "client.24846": [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0,
> 0], [0, 0]]}, "metrics": {"delayed_ranks": [], "mds.0":
> {"client.24835": [], "client.24846": []}}}
> ---snip---

The bunch of "N/A"s are due to the client metadata not having entries
for which metrics are valid (sent) by the client. Normally, you should
see something like::

{
  "version": 1,
  "global_counters": [
    "cap_hit",
    "read_latency",
    "write_latency",
    "metadata_latency",
    "dentry_lease",
    "opened_files",
    "pinned_icaps",
    "opened_inodes"
  ],
  "counters": [],
  "client_metadata": {
    "client.624141": {
      "IP": "X.X.X.X",
      "hostname": "host1",
      "root": "/",
      "mount_point": "/mnt/cephfs",
      "valid_metrics": [
        "cap_hit",
        "read_latency",
        "write_latency",
        "metadata_latency",
        "dentry_lease",
        "opened_files",
        "pinned_icaps",
        "opened_inodes"
      ]
    },
    ...

What does "ceph tell mds.<rank0 id> client ls" dump?

And, as Jos mentioned, it takes a couple of seconds for the stats to
show up when run afresh.
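
Something like this makes that easy to watch (json.tool is just for
pretty-printing):

  # watch -n 2 'ceph fs perf stats | python3 -m json.tool'

The global_metrics should move away from all zeroes once the clients
start reporting.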

>
> even though I have written a couple of GB into the CephFS.
>
> Regards,
> Eugen
>
>
> Quoting Erwin Bogaard <erwin.bogaard@xxxxxxxxx>:
>
> > Hi,
> >
> > I just upgraded our cluster to pacific 16.2.5.
> > As I'm curious what insights cephfs-top could provide, I followed the
> > steps in the documentation.
> > After enabling the mgr module "stats":
> >
> > # ceph mgr module ls
> > ...
> >     "enabled_modules": [
> >         "dashboard",
> >         "iostat",
> >         "restful",
> >         "stats",
> >         "zabbix"
> > ...
> >
> > I tried the following command:
> > # ceph fs perf stats
> > {"version": 1, "global_counters": ["cap_hit", "read_latency",
> > "write_latency", "metadata_latency", "dentry_lease", "opened_files",
> > "pinned_icaps", "opened_inodes"], "counters": [], "client_metadata": {},
> > "global_metrics": {}, "metrics": {"delayed_ranks": []}}
> >
> > As you can see, this returns no info whatsoever. The same with:
> >
> > # cephfs-top
> > cluster ceph does not exist
> >
> > The actual cluster name is "ceph".
> >
> > So I don't understand why "ceph fs perf stats" isn't showing any
> > information.
> > Maybe another indicator that something isn't right:
> >
> > # ceph fs status
> > cephfs - 0 clients
> > ======
> > RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS
> > ...
> >
> > I see "0 clients". When I take a look in the mgr dashboard, I can actually
> > see all clients. Which are RHEL 7 & 8 cephfs kernel clients.
> > There is only 1 mds active, and 1 in standby-replay.
> > I have multiple pools active, but only 1 fs.
> >
> > Does anyone have a suggestion where I should look to enable gathering
> > these stats?


-- 
Cheers,
Venky

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


