Hi,
this works for me in multiple virtual test clusters across different
Ceph versions, including 19.2.0, both inside a cephadm shell and
outside of it. Maybe do a 'ceph mgr fail' and retry?
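For completeness, the failover-and-retry sequence looks roughly like
this (an argument-less 'ceph mgr fail' fails the active mgr on recent
releases; on older ones you may need to name the active mgr explicitly):

  ceph mgr fail        # fail the active mgr so a standby takes over
  ceph mgr stat        # confirm a new active mgr has come up
  ceph osd status      # retry against the fresh mgr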
Quoting Marcus <marcus@xxxxxxxxxx>:
Hi all,
We are running a Ceph cluster with a filesystem that spans 5 servers.
Ceph version: 19.2.0 squid
If I run 'ceph osd status' when all hosts are online, the output is
the way it should be and it prints the status of all OSDs. If just a
couple of OSDs are down, the status is still printed and the specific
OSDs are shown as down.
One of the servers went down and we ended up with a health warning.
If I run 'ceph osd stat'
I get the information that 64 out of 80 OSDs are in.
If I try to run 'ceph osd status'
I get a Python error:
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1864, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/mgr_module.py", line 499, in call
    return self.func(mgr, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/ceph/mgr/status/module.py", line 337, in handle_osd_status
    assert metadata
AssertionError
I suppose this is some type of bug when one host is down?
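My guess from the traceback: handle_osd_status asserts that per-OSD
metadata is present, and the lookup presumably comes back empty for
OSDs on the host that went down. A tiny standalone sketch of that
failure mode (get_metadata here is a stand-in for the mgr's metadata
lookup, not the real module code):

  from typing import Optional

  DOWN_OSDS = {3}  # pretend osd.3 sits on the host that went down

  def get_metadata(osd_id: int) -> Optional[dict]:
      # Stand-in for the mgr's metadata lookup, which can return
      # nothing for a daemon whose host is unreachable.
      return None if osd_id in DOWN_OSDS else {"hostname": f"host{osd_id}"}

  for osd_id in range(5):
      metadata = get_metadata(osd_id)
      assert metadata  # AssertionError here, matching the traceback above
      print(f"osd.{osd_id} on {metadata['hostname']}")

If that guess is right, something like 'if not metadata: continue'
instead of the assert would let the command still report the
remaining OSDs, though I don't know what the actual fix upstream
would be.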
Thanks!
Marcus
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx