Hi Eugen

That works. Apart from the release notes, there’s also documentation that has this wrong:
https://docs.ceph.com/en/latest/rados/operations/monitoring/#network-performance-checks

Thank you!

Denis

> On 12 Nov 2020, at 08:15, Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> although the Nautilus v14.2.5 release notes [1] state that this command is available for both mgr and osd, it doesn't seem to apply to mgr. But you should be able to run it for an osd daemon.
>
> Regards,
> Eugen
>
> [1] https://docs.ceph.com/en/latest/releases/nautilus/
>
> Zitat von Denis Krienbühl <denis@xxxxxxx>:
>
>> Hi
>>
>> We’ve recently encountered the following errors:
>>
>> [WRN] OSD_SLOW_PING_TIME_BACK: Slow OSD heartbeats on back (longest 2752.832ms)
>>     Slow OSD heartbeats on back from osd.2 [nvme-a] to osd.290 [nvme-c] 2752.832 msec
>>     ...
>>     Truncated long network list. Use ceph daemon mgr.# dump_osd_network for more information
>>
>> To get more information we wanted to run the dump_osd_network command, but it doesn’t seem to be a valid command:
>>
>> ceph daemon /var/run/ceph/ceph-mgr.$(hostname).asok dump_osd_network 0
>>
>> no valid command found; 10 closest matches:
>> 0
>> 1
>> 2
>> abort
>> assert
>> config diff
>> config diff get <var>
>> config get <var>
>> config help [<var>]
>> config set <var> <val>...
>> admin_socket: invalid command
>>
>> Other commands, like ceph daemon dump_cache, work, so it seems to hit the right socket.
>>
>> What am I doing wrong?
>>
>> Cheers,
>>
>> Denis
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
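For readers hitting the same warning: per Eugen's reply, dump_osd_network is an osd admin-socket command, not a mgr one. A sketch of running it against an osd (the osd id 2 is taken from the health warning above; substitute your own, and run it on the host where that osd lives):

```shell
# Query the osd's admin socket by daemon name (run on the osd's host):
ceph daemon osd.2 dump_osd_network

# Equivalently, address the socket by path, mirroring the mgr attempt above:
ceph daemon /var/run/ceph/ceph-osd.2.asok dump_osd_network

# An optional threshold argument (in ms) filters the output; 0 dumps all
# collected ping times rather than only those over the warning threshold:
ceph daemon osd.2 dump_osd_network 0
```

These commands require access to the admin socket (typically root or the ceph user) on the node running the osd.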