Quincy + cephadm, zeroing weights of OSDs in the CRUSH map

I recently moved to Quincy and cephadm.
I noticed that when I moved some drives from one machine to another, at some
point they got marked as weight 0 in the CRUSH map.
The first time that was fine; I just fixed it and figured it was something
I had done wrong when moving the drives.
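For reference, zeroed weights can be inspected and restored with something
like the following (osd.12 and 3.63869 are placeholders for the affected OSD
and its capacity in TiB):

# Show the CRUSH hierarchy; affected OSDs appear with CRUSH weight 0
ceph osd tree

# Restore the CRUSH weight of one OSD (osd.12 and 3.63869 are placeholders;
# the CRUSH weight is normally the device capacity in TiB)
ceph osd crush reweight osd.12 3.63869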

Now it has happened a second time: a significant number of OSDs on the
newest host in the cluster got marked weight 0 in the CRUSH map after my
first ceph orch upgrade.
# From 17.2.1:
ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.3
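
To correlate the reweights with the upgrade's progress (i.e. which daemons
were being redeployed at the time), the orchestrator's upgrade state can be
queried:

# Show the target image and current state of the running upgrade
ceph orch upgrade status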


Why would ceph / cephadm change crushmap weights of hdd osds?
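
One guess on my end, not something I have confirmed: if the initial CRUSH
weight for newly created OSDs is overridden to 0, any OSD that gets
recreated during a move or upgrade would come up with weight 0. The
relevant settings can be checked with:

# Initial CRUSH weight given to newly created OSDs (by default this follows
# the device capacity in TiB; an override of 0 would explain the symptom)
ceph config get osd osd_crush_initial_weight

# Whether OSDs update their CRUSH location on daemon start
ceph config get osd osd_crush_update_on_start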


