Non-existent host in maintenance

Hi,

I have removed a host (hvs004) that was in maintenance mode.

The system disk of this host had failed, so I removed the host hvs004 from Ceph, replaced the system disk, erased all the OSD disks, and reinstalled the host as hvs005.
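
For reference, the rebuild was roughly the sequence below (the IP address and device path are placeholders, and the zap line is only illustrative of how the OSD disks were wiped):

    # hvs004 had been put into maintenance before its system disk failed
    ceph orch host maintenance enter hvs004
    # after reinstalling the node as hvs005: add it back and wipe the old OSD disks
    ceph orch host add hvs005 xxx.xxx.xxx.xxx _admin
    ceph orch device zap hvs005 /dev/sdX --force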

This leaves the cluster with a health warning that doesn't go away:
health: HEALTH_WARN
            1 host is in maintenance mode

The removal was done with "ceph orch host rm hvs004 --offline --force" in the cephadm shell.
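
That is, run from inside an interactive cephadm shell (like the output below); a one-shot equivalent would be something like:

    cephadm shell -- ceph orch host rm hvs004 --offline --force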

How can I clear this stale warning?

Some more info:

root@hvs001:/# ceph orch host ls
HOST    ADDR       LABELS  STATUS
hvs001  xxx.xxx.xxx.xxx  _admin
hvs002  xxx.xxx.xxx.xxx  _admin
hvs003  xxx.xxx.xxx.xxx  _admin
hvs005  xxx.xxx.xxx.xxx  _admin
4 hosts in cluster

root@hvs001:/# ceph health detail
HEALTH_WARN 1 host is in maintenance mode
[WRN] HOST_IN_MAINTENANCE: 1 host is in maintenance mode
    hvs004 is in maintenance        :-/

Help is greatly appreciated…