cephadm found duplicate OSD, how to resolve?

Hi,

I'm in the process of re-provisioning OSDs on a test cluster with cephadm. One of the OSD IDs that supposedly used to live on host3 is now alive on host2, and cephadm is not happy about that:

"Found duplicate OSDs: osd.3 in status running on host2, osd.3 in status stopped on host3."

As far as I can see there has never been an osd.3 on host3 managed by cephadm (there was an osd.3 on that host back when this cluster was managed by ceph-ansible).

How can I find out what cephadm's view of the world is? And how can I tell cephadm to forget about osd.3 on host3 without destroying osd.3 on host2?
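
For reference, this is what I was planning to poke at; I'm not sure these are the right tools, so treat it as a sketch (<fsid> is my cluster's fsid, and I'm assuming cephadm builds its daemon inventory by scanning /var/lib/ceph/<fsid> on each host):

    # on a node with an admin keyring: the orchestrator's cached view per host
    ceph orch ps host3
    ceph orch ps host2

    # directly on host3: the daemons cephadm finds on disk there
    cephadm ls

    # my assumption: if osd.3 shows up in 'cephadm ls' on host3 as a stopped
    # leftover, this would remove only that local stale entry and leave
    # osd.3 on host2 untouched -- but I'd like confirmation before running it
    cephadm rm-daemon --name osd.3 --fsid <fsid>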

Thanks,

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
