Hi,
the noin flag seems to apply only to existing OSDs that are already in the
CRUSH map; it doesn't apply to newly created OSDs. I was able to confirm
this in a small test cluster on both Pacific and Reef. I don't have any
insight into whether that is by design, but I assume it's supposed to
work like that.
If you want to prevent data movement when creating new OSDs, you can set
the osd_crush_initial_weight config option to 0. We use that in our
cluster as well, but of course you then have to reweight new OSDs
manually.
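For illustration, the workflow would look roughly like this (a sketch; the OSD id and the target CRUSH weight are placeholders you'd substitute for your own hardware):

```shell
# Make new OSDs come up with CRUSH weight 0, so no PGs are mapped to them
ceph config set osd osd_crush_initial_weight 0

# ... deploy the new OSDs; they will be up/in but receive no data ...

# Later, assign the real CRUSH weight manually to start the backfill,
# e.g. for a 9.38680 TiB device (osd.11 is a placeholder id):
ceph osd crush reweight osd.11 9.38680
```

You can also raise the weight in several smaller steps to throttle how much backfill happens at once.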
Regards,
Eugen
Quoting Zakhar Kirpichenko <zakhar@xxxxxxxxx>:
Any comments regarding `osd noin`, please?
/Z
On Tue, 2 Apr 2024 at 16:09, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
Hi,
I'm adding a few OSDs to an existing cluster, the cluster is running with
`osd noout,noin`:
cluster:
id: 3f50555a-ae2a-11eb-a2fc-ffde44714d86
health: HEALTH_WARN
noout,noin flag(s) set
Specifically `noin` is documented as "prevents booting OSDs from being
marked in". But freshly added OSDs were immediately marked `up` and `in`:
services:
...
osd: 96 osds: 96 up (since 5m), 96 in (since 6m); 338 remapped pgs
flags noout,noin
# ceph osd tree in | grep -E "osd.11|osd.12|osd.26"
11 hdd 9.38680 osd.11 up 1.00000 1.00000
12 hdd 9.38680 osd.12 up 1.00000 1.00000
26 hdd 9.38680 osd.26 up 1.00000 1.00000
Is this expected behavior? Do I misunderstand the purpose of the `noin`
option?
Best regards,
Zakhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx