Hi,
please find my 'client ls' output at the end.
I was quite patient: I waited for an hour or two, and tried it again
this morning. I copied a couple of GB into cephfs, which takes quite a
while, so there is more or less a constant stream of data. This is
also visible in the daemonperf output, of course, and the perf dump
also shows valid data.
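In case it helps, this is roughly how I watch the counters (a sketch;
the MDS daemon name mds.pacific is assumed here):
---snip---
# live per-second deltas of the MDS perf counters
ceph daemonperf mds.pacific
# one-off dump of the raw counters (run on the MDS host, via the admin socket)
ceph daemon mds.pacific perf dump
---snip---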
Regards,
Eugen
---snip---
pacific:~ # ceph tell mds.0 client ls
2021-07-16T08:37:21.987+0200 7f9715ffb700 0 client.27311 ms_handle_reset on v2:192.168.124.35:6800/3365614954
2021-07-16T08:37:22.071+0200 7f9716ffd700 0 client.27313 ms_handle_reset on v2:192.168.124.35:6800/3365614954
[
    {
        "id": 24846,
        "entity": {
            "name": {
                "type": "client",
                "num": 24846
            },
            "addr": {
                "type": "v1",
                "addr": "192.168.124.168:0",
                "nonce": 3813501997
            }
        },
        "state": "open",
        "num_leases": 0,
        "num_caps": 4450,
        "request_load_avg": 5130,
        "uptime": 68589.679999999993,
        "requests_in_flight": 189,
        "num_completed_requests": 0,
        "num_completed_flushes": 0,
        "reconnecting": false,
        "recall_caps": {
            "value": 0,
            "halflife": 60
        },
        "release_caps": {
            "value": 2203.5658328262321,
            "halflife": 60
        },
        "recall_caps_throttle": {
            "value": 0,
            "halflife": 1.5
        },
        "recall_caps_throttle2o": {
            "value": 0,
            "halflife": 0.5
        },
        "session_cache_liveness": {
            "value": 22304.741127381614,
            "halflife": 300
        },
        "cap_acquisition": {
            "value": 0,
            "halflife": 10
        },
        "delegated_inos": [
            {
                "start": "0x10000000455",
                "length": 500
            }
        ],
        "inst": "client.24846 v1:192.168.124.168:0/3813501997",
        "completed_requests": [],
        "prealloc_inos": [
            {
                "start": "0x10000000455",
                "length": 500
            },
            {
                "start": "0x100000151d4",
                "length": 809
            }
        ],
        "client_metadata": {
            "client_features": {
                "feature_bits": "0x0000000000003bff"
            },
            "metric_spec": {
                "metric_flags": {
                    "feature_bits": "0x"
                }
            },
            "entity_id": "nova",
            "hostname": "host-192-168-124-168",
            "kernel_version": "5.3.18-lp152.57-default",
            "root": "/client"
        }
    },
    {
        "id": 24835,
        "entity": {
            "name": {
                "type": "client",
                "num": 24835
            },
            "addr": {
                "type": "v1",
                "addr": "192.168.124.35:0",
                "nonce": 3089325989
            }
        },
        "state": "open",
        "num_leases": 0,
        "num_caps": 1,
        "request_load_avg": 0,
        "uptime": 68774.271999999997,
        "requests_in_flight": 0,
        "num_completed_requests": 0,
        "num_completed_flushes": 0,
        "reconnecting": false,
        "recall_caps": {
            "value": 0,
            "halflife": 60
        },
        "release_caps": {
            "value": 0,
            "halflife": 60
        },
        "recall_caps_throttle": {
            "value": 0,
            "halflife": 1.5
        },
        "recall_caps_throttle2o": {
            "value": 0,
            "halflife": 0.5
        },
        "session_cache_liveness": {
            "value": 0,
            "halflife": 300
        },
        "cap_acquisition": {
            "value": 0,
            "halflife": 10
        },
        "delegated_inos": [],
        "inst": "client.24835 v1:192.168.124.35:0/3089325989",
        "completed_requests": [],
        "prealloc_inos": [
            {
                "start": "0x10000000005",
                "length": 499
            },
            {
                "start": "0x1000000025e",
                "length": 501
            }
        ],
        "client_metadata": {
            "client_features": {
                "feature_bits": "0x0000000000003bff"
            },
            "metric_spec": {
                "metric_flags": {
                    "feature_bits": "0x"
                }
            },
            "entity_id": "admin",
            "hostname": "pacific",
            "kernel_version": "5.3.18-lp152.81-default",
            "root": "/"
        }
    }
]
---snip---
Quoting Venky Shankar <vshankar@xxxxxxxxxx>:
On Thu, Jul 15, 2021 at 5:18 PM Eugen Block <eblock@xxxxxx> wrote:
Hi,
I just set up a virtual one-node cluster (16.2.5) to check out
cephfs-top. Regarding the number of clients, I was a little surprised,
too: in the first couple of minutes the number switched back and forth
between 0 and 1 although I had not connected any client yet. But after
a while the number became stable and correct. I have two clients
connected now, but I don't see any stats despite having the stats
module enabled:
The "(dis)appearing" client is the libcephfs instance in mgr/volumes.
When mgr/volumes cleans up its connection, you would see the client
count drop (and increase when it starts instantiating connections).
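If you want to observe this, something like the following works (a
sketch; rank 0 assumed, and "session ls" is equivalent to "client ls"):
---snip---
# watch the MDS session list; the transient mgr/volumes libcephfs
# session appears and disappears as the connection is recreated
watch -n 2 'ceph tell mds.0 session ls'
---snip---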
---snip---
cephfs-top - Thu Jul 15 13:35:41 2021
Client(s): 2 - 0 FUSE, 0 kclient, 2 libcephfs
client_id  mount_root  chit(%)  rlat(s)  wlat(s)  mlat(s)  dlease(%)  ofiles  oicaps  oinodes  mount_point@host/addr
24835      /           N/A      N/A      N/A      N/A      N/A        N/A     N/A     N/A      N/A@pacific/v1:192.168.124.35
24846      /client     N/A      N/A      N/A      N/A      N/A        N/A     N/A     N/A      N/A@host-192-168-124-168/v1:192.168.124.168
---snip---
The command 'ceph fs perf stats' also only shows this:
---snip---
pacific:~ # ceph fs perf stats
{"version": 1, "global_counters": ["cap_hit", "read_latency",
"write_latency", "metadata_latency", "dentry_lease", "opened_files",
"pinned_icaps", "opened_inodes"], "counters": [], "client_metadata":
{"client.24835": {"IP": "v1:192.168.124.35", "hostname": "pacific",
"root": "/", "mount_point": "N/A"}, "client.24846": {"IP":
"v1:192.168.124.168", "hostname": "host-192-168-124-168", "root":
"/client", "mount_point": "N/A"}}, "global_metrics": {"client.24835":
[[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]],
"client.24846": [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0, 0], [0,
0], [0, 0]]}, "metrics": {"delayed_ranks": [], "mds.0":
{"client.24835": [], "client.24846": []}}}
---snip---
The bunch of "N/A"s is due to the client metadata not containing the
list of metrics that are valid (i.e., sent) by the client. Normally,
you should see something like:
{
    "version": 1,
    "global_counters": [
        "cap_hit",
        "read_latency",
        "write_latency",
        "metadata_latency",
        "dentry_lease",
        "opened_files",
        "pinned_icaps",
        "opened_inodes"
    ],
    "counters": [],
    "client_metadata": {
        "client.624141": {
            "IP": "X.X.X.X",
            "hostname": "host1",
            "root": "/",
            "mount_point": "/mnt/cephfs",
            "valid_metrics": [
                "cap_hit",
                "read_latency",
                "write_latency",
                "metadata_latency",
                "dentry_lease",
                "opened_files",
                "pinned_icaps",
                "opened_inodes"
            ]
        },
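To quickly see which metrics each client reports as valid, something
like this should work (a sketch, assuming jq is installed):
---snip---
# per client: the reported valid_metrics, or "none" when the field is absent
ceph fs perf stats | jq '.client_metadata | map_values(.valid_metrics // "none")'
---snip---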
What does "ceph tell mds.<rank0 id> client ls" dump?
And, as Jos mentioned, it takes a couple of seconds for the stats to
show up when run afresh.
although I have written a couple of GB into the cephfs.
Regards,
Eugen
Quoting Erwin Bogaard <erwin.bogaard@xxxxxxxxx>:
> Hi,
>
> I just upgraded our cluster to pacific 16.2.5.
> As I'm curious what insights cephfs-top could give, I followed the
> steps in the documentation.
> After enabling the mgr module "stats":
>
> # ceph mgr module ls
> ...
> "enabled_modules": [
> "dashboard",
> "iostat",
> "restful",
> "stats",
> "zabbix"
> ...
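> (For reference, the module is enabled with the documented command:)
>
> # ceph mgr module enable stats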
>
> I tried the following command:
> # ceph fs perf stats
> {"version": 1, "global_counters": ["cap_hit", "read_latency",
> "write_latency", "metadata_latency", "dentry_lease", "opened_files",
> "pinned_icaps", "opened_inodes"], "counters": [], "client_metadata": {},
> "global_metrics": {}, "metrics": {"delayed_ranks": []}}
>
> As you can see, this returns no info whatsoever. The same with:
>
> # cephfs-top
> cluster ceph does not exist
>
> The actual cluster name is "ceph".
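> (In case it matters: per the documentation, cephfs-top also needs a
> dedicated client.fstop user, created roughly like this:)
>
> # ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'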
>
> So I don't understand why "ceph fs perf stats" isn't showing any
> information.
> Maybe another indicator that something isn't right:
>
> # ceph fs status
> cephfs - 0 clients
> ======
> RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
> ...
>
> I see "0 clients". When I take a look in the mgr dashboard, I can actually
> see all clients. Which are RHEL 7 & 8 cephfs kernel clients.
> There is only 1 mds active, and 1 in standby-replay.
> I have multiple pools active, but only 1 fs.
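> (To cross-check the client count directly on the MDS, something like
> this should work; rank 0 assumed:)
>
> # ceph tell mds.0 session ls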
>
> Does anyone have a suggestion where I can take a look to enable gathering
> the stats?
--
Cheers,
Venky
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx