'ceph fs status' no longer works?

Hi All,

For a while now I've been using 'ceph fs status' to show the current active MDS servers, filesystem status, etc. I recently took down my MDS servers and added RAM to them (one by one, so the filesystem stayed online). After doing that with all four MDS servers (I had two active and two standby), everything looks OK and 'ceph -s' reports HEALTH_OK. But when I run 'ceph fs status' now, I get this:

# ceph fs status
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1811, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 474, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/status/module.py", line 109, in handle_fs_status
    assert metadata
AssertionError

This is on ceph 18.2.1 reef. It's very odd: can anyone think of a reason why 'ceph fs status' would stop working after taking each of the servers down for maintenance?

The filesystem itself is online and working just fine, however. This ceph instance is deployed via cephadm on RHEL 9.3, so everything is containerized in podman.
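
In case it helps anyone narrow this down: reading the traceback, my guess is that the mgr's cached metadata for one or more of the restarted MDS daemons is missing or empty, which is what would trip that 'assert metadata'. Below is a tiny self-contained Python sketch of that failure mode as I understand it. The FakeMgr class, the get_metadata() call, and the fsmap layout are all made up for illustration, not the actual mgr source:

class FakeMgr:
    """Stand-in for the mgr module handle, only to show the failure mode."""
    def __init__(self, metadata_cache):
        self.metadata_cache = metadata_cache

    def get_metadata(self, svc_type, svc_id):
        # Return whatever the mgr has cached for this daemon, or None.
        return self.metadata_cache.get((svc_type, svc_id))

def handle_fs_status(mgr, fsmap):
    # Walk every MDS in every filesystem and look up its cached metadata.
    for fs in fsmap["filesystems"]:
        for info in fs["mdsmap"]["info"].values():
            metadata = mgr.get_metadata("mds", info["name"])
            assert metadata  # fires if the cache entry is missing or empty

# An mgr whose cache never re-learned the restarted daemons:
mgr = FakeMgr(metadata_cache={})
fsmap = {"filesystems": [{"mdsmap": {"info": {"gid1": {"name": "mds.a"}}}}]}
handle_fs_status(mgr, fsmap)  # raises AssertionError, like 'ceph fs status' does

If that guess is right, it would also explain why the filesystem itself keeps working while only the status display breaks.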

Thanks again,
erich
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


