Re: cephadm stuck in deleting state

Hi,

do you still see the daemon on the iscsi host(s) with 'cephadm ls'? If so, you can remove it with cephadm directly:

cephadm rm-daemon --name iscsi.iscsi
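
The exact daemon name is whatever 'cephadm ls' reports on the gateway host; it usually looks something like iscsi.iscsi.<hostname>.<random suffix>, so the name below is only a placeholder, not taken from your cluster:

# on iscsi-gw-1 (and iscsi-gw-2): list the daemons cephadm deployed locally
cephadm ls | grep -i iscsi

# remove the daemon using the exact name from the output above
# (depending on the cephadm version you may also have to pass --fsid)
cephadm rm-daemon --name iscsi.iscsi.iscsi-gw-1.<suffix>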

Does that help?
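
If the failed cephadm module error in 'ceph health detail' doesn't clear on its own once the daemon is gone, a mgr failover usually restarts the module (just a general hint; 'ceph mgr fail' without a name fails the currently active mgr):

ceph mgr fail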


Quoting Fyodor Ustinov <ufm@xxxxxx>:

Hi!

I have a freshly installed Pacific cluster:

root@s-26-9-19-mon-m1:~# ceph version
ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)

I managed to bring it to this state:

root@s-26-9-19-mon-m1:~# ceph health detail
HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: iSCSI gateway 'iscsi-gw-1' does not exist retval: -2
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: iSCSI gateway 'iscsi-gw-1' does not exist retval: -2
    Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: iSCSI gateway 'iscsi-gw-1' does not exist retval: -2


root@s-26-9-19-mon-m1:~# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED   AGE  PLACEMENT
alertmanager   ?:9093,9094      1/1  14m ago     9d   count:1;label:mon
crash                         12/12  14m ago     11d  *
grafana        ?:3000           1/1  14m ago     9d   count:1;label:mon
iscsi.iscsi                     0/0  <deleting>  7h   iscsi-gw-1;iscsi-gw-2
mgr                             2/2  14m ago     9d   count:2;label:mon
mon                             3/3  14m ago     5d   count:3
node-exporter  ?:9100         12/12  14m ago     11d  *
osd                           54/54  14m ago     -    <unmanaged>
prometheus     ?:9095           1/1  14m ago     5d   count:1;label:mon

root@s-26-9-19-mon-m1:~# ceph orch host ls
HOST              ADDR          LABELS      STATUS
s-26-9-17         10.5.107.104  _admin
s-26-9-18         10.5.107.105  _admin
s-26-9-19-mon-m1  10.5.107.101  mon _admin
s-26-9-20         10.5.107.106  _admin
s-26-9-21         10.5.107.107  _admin
s-26-9-22         10.5.107.110  _admin
s-26-9-23         10.5.107.108  _admin
s-26-9-24-mon-m2  10.5.107.102  _admin mon
s-26-9-25         10.5.107.111  _admin
s-26-9-26         10.5.107.109  _admin
s-26-9-27         10.5.107.112  _admin
s-26-9-28-mon-m3  10.5.107.103  _admin mon


How can we get the cluster out of this state now?

WBR,
    Fyodor.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


