Re: stray daemons not managed by cephadm

If you do a mgr failover ("ceph mgr fail") and wait a few minutes, do the
issues clear out? I know there's a bug where removed mons get marked as
stray daemons when downsizing by multiple mons at once (cephadm might be
removing them too quickly, I'm not totally sure of the cause), but doing a
mgr failover has always cleared the stray daemon notifications for me. For
some context, what it lists as stray daemons is roughly whatever is
reported by "ceph node ls" but doesn't show up in "ceph orch ps". The idea
is that the "ceph orch ps" output shows all the daemons cephadm is aware of
and managing, while "ceph node ls" shows the daemons the cluster, but not
necessarily cephadm itself, is aware of. In my case the mon daemons marked
stray were still showing up in that "ceph node ls" output, and doing a mgr
failover would clean that up, after which the stray daemon warnings would
also disappear.
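
In case it's useful, this is roughly the sequence I run to check (the
--daemon-type filter is just how I narrow the output, nothing required):

    # fail over to a standby so the active mgr rebuilds its daemon inventory
    ceph mgr fail
    # wait a few minutes, then compare what cephadm is managing ...
    ceph orch ps --daemon-type mon
    # ... against what the cluster itself thinks is running on each host
    ceph node ls mon
    # the stray daemon warning should clear once the two agree
    ceph health detail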

On Mon, Jun 12, 2023 at 8:54 AM farhad kh <farhad.khedriyan@xxxxxxxxx>
wrote:

>  I deployed a ceph cluster with 8 nodes (v17.2.6) and, after adding all of
> the hosts, ceph created 5 mon daemon instances.
> I tried to decrease that to 3 instances with `ceph orch apply mon
> --placement=label:mon,count:3`. It worked, but after that I got the error
> "2 stray daemons not managed by cephadm".
> But every time I tried to deploy and delete other instances, this number
> increased. Now I have 7 daemons that are not managed by cephadm.
> How should I deal with this issue?
>
> ------------------------
> [root@opcsdfpsbpp0201 ~]# ceph -s
>   cluster:
>     id:     79a2627c-0821-11ee-a494-00505695c58c
>     health: HEALTH_WARN
>             16 stray daemon(s) not managed by cephadm
>
>   services:
>     mon: 3 daemons, quorum opcsdfpsbpp0201,opcsdfpsbpp0205,opcsdfpsbpp0203
> (age 2m)
>     mgr: opcsdfpsbpp0201.vttwxa(active, since 27h), standbys:
> opcsdfpsbpp0207.kzxepm
>     mds: 1/1 daemons up, 2 standby
>     osd: 74 osds: 74 up (since 26h), 74 in (since 26h)
>
>   data:
>     volumes: 1/1 healthy
>     pools:   6 pools, 6 pgs
>     objects: 2.10k objects, 8.1 GiB
>     usage:   28 GiB used, 148 TiB / 148 TiB avail
>     pgs:     6 active+clean
>
>   io:
>     client:   426 B/s rd, 0 op/s rd, 0 op/s wr
>
> [root@opcsdfpsbpp0201 ~]# ceph health detail
> HEALTH_WARN 16 stray daemon(s) not managed by cephadm
> [WRN] CEPHADM_STRAY_DAEMON: 16 stray daemon(s) not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0207 on host opcsdfpsbpp0203 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0203 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0211 on host opcsdfpsbpp0203 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0213 on host opcsdfpsbpp0203 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0207 on host opcsdfpsbpp0205 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0205 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0211 on host opcsdfpsbpp0205 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0213 on host opcsdfpsbpp0205 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0213 on host opcsdfpsbpp0207 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0207 on host opcsdfpsbpp0209 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0209 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0211 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0215 on host opcsdfpsbpp0211 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0211 on host opcsdfpsbpp0213 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0215 not managed by
> cephadm
>     stray daemon mon.opcsdfpsbpp0213 on host opcsdfpsbpp0215 not managed by
> cephadm
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



