+1 for this issue, I've managed to reproduce it on my test cluster.

Kind regards,
Nino Kotur

On Mon, Jun 12, 2023 at 2:54 PM farhad kh <farhad.khedriyan@xxxxxxxxx> wrote:

> I deployed a Ceph cluster with 8 nodes (v17.2.6), and after adding all of
> the hosts, cephadm created 5 mon daemon instances.
> I tried to decrease that to 3 instances with `ceph orch apply mon
> --placement=label:mon,count:3`. It worked, but after that I got the error
> "2 stray daemons not managed by cephadm".
> But every time I deployed and deleted more instances, this number
> increased; now I have 7 daemons that are not managed by cephadm.
> How should I deal with this issue?
>
> ------------------------
> [root@opcsdfpsbpp0201 ~]# ceph -s
>   cluster:
>     id:     79a2627c-0821-11ee-a494-00505695c58c
>     health: HEALTH_WARN
>             16 stray daemon(s) not managed by cephadm
>
>   services:
>     mon: 3 daemons, quorum opcsdfpsbpp0201,opcsdfpsbpp0205,opcsdfpsbpp0203 (age 2m)
>     mgr: opcsdfpsbpp0201.vttwxa(active, since 27h), standbys: opcsdfpsbpp0207.kzxepm
>     mds: 1/1 daemons up, 2 standby
>     osd: 74 osds: 74 up (since 26h), 74 in (since 26h)
>
>   data:
>     volumes: 1/1 healthy
>     pools:   6 pools, 6 pgs
>     objects: 2.10k objects, 8.1 GiB
>     usage:   28 GiB used, 148 TiB / 148 TiB avail
>     pgs:     6 active+clean
>
>   io:
>     client:   426 B/s rd, 0 op/s rd, 0 op/s wr
>
> [root@opcsdfpsbpp0201 ~]# ceph health detail
> HEALTH_WARN 16 stray daemon(s) not managed by cephadm
> [WRN] CEPHADM_STRAY_DAEMON: 16 stray daemon(s) not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0207 on host opcsdfpsbpp0203 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0203 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0211 on host opcsdfpsbpp0203 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0213 on host opcsdfpsbpp0203 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0207 on host opcsdfpsbpp0205 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0205 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0211 on host opcsdfpsbpp0205 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0213 on host opcsdfpsbpp0205 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0213 on host opcsdfpsbpp0207 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0207 on host opcsdfpsbpp0209 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0209 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0211 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0215 on host opcsdfpsbpp0211 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0211 on host opcsdfpsbpp0213 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0209 on host opcsdfpsbpp0215 not managed by cephadm
>     stray daemon mon.opcsdfpsbpp0213 on host opcsdfpsbpp0215 not managed by cephadm

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
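
[Editorial note, not part of the thread: a rough way to see where the stray entries come from is to compare what cephadm actually manages with what the cluster itself reports. The commands below are standard Ceph/cephadm CLI; the exact output and whether a mgr failover clears the warning will depend on the cluster.]

# Daemons the orchestrator is actually managing (run on any admin node)
ceph orch ps | grep mon

# Monitors the cluster has in its mon map
ceph mon dump

# On a host named in the health warning, list the daemons cephadm finds locally
cephadm ls

# A mon that appears in the health warning but in neither `ceph orch ps` nor
# `cephadm ls` on that host is stale orchestrator state rather than a running
# daemon. Failing over the active mgr forces cephadm to rescan the hosts and
# refresh its inventory, which is worth trying before anything more invasive:
ceph mgr fail

If the stray mons really were running outside cephadm's control, `cephadm ls` on those hosts would show them; if it does not, the warning is reporting stale metadata left over from the placement changes described above.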