ceph-volume failed after replacing disk

Hi all,

We replaced a faulty disk in one of our N OSDs and tried to follow the steps in "Replacing an OSD" at http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-osds/, but got an error:

# ceph osd destroy 71 --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --osd-id 71 --block.db /dev/db/lv01
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
-->  RuntimeError: The osd ID 71 is already in use or does not exist.

"ceph -s" still shows N OSDs. I then removed it with "ceph osd rm 71". Now "ceph -s" shows N-1 OSDs, and ID 71 no longer appears in "ceph osd ls".
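
For completeness, these are the checks I would use to confirm the ID is really gone (osd.71 here; I am not sure whether a leftover auth key for osd.71 could be involved):

# ceph osd tree | grep osd.71
# ceph osd ls
# ceph auth get osd.71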

However, repeating the ceph-volume command still gives the same error.
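
Could it be that the ID can no longer be reused once "ceph osd rm" has removed it from the osdmap? If so, would the right move be to zap the old LVs and let ceph-volume pick a new ID, something like the following (lv paths are from our layout; dropping --osd-id is my guess, not from the docs):

# ceph-volume lvm zap /dev/data/lv01
# ceph-volume lvm zap /dev/db/lv01
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --block.db /dev/db/lv01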

We're running Ceph 14.2.1. I must have missed some steps. Would anyone please help? Thanks a lot.

Rgds,

/stwong

