Gents,
At some point my cluster ended up with a gap in the OSD sequence numbers. Basically, because the "ceph auth del" / "ceph osd rm" steps were missed in a previous disk replacement for osd.17, the replacement came up as a new osd.34. That did not really bother me until recently, when I started replacing all of the smaller disks with bigger ones.
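For reference, the full removal sequence I should have run back then was roughly the following (osd.17 as the example, daemon already stopped on its host; this is from memory, so treat it as a sketch rather than the exact commands I typed):

    ceph osd out osd.17
    ceph osd crush remove osd.17
    ceph auth del osd.17
    ceph osd rm 17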
Ceph also seems to pick the lowest available OSD id for each new OSD. When I replaced osd.18, the new disk came up as osd.17; when I did osd.19, it became osd.18. This generates more backfill_wait PGs than it would if each replacement kept its original OSD number.
Using ceph-deploy on version 10.2.3, is there a way to specify the OSD id when doing osd activate?
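For context, the replacement workflow I am using looks roughly like this (hostname and device are placeholders):

    ceph-deploy osd prepare node1:/dev/sdf
    ceph-deploy osd activate node1:/dev/sdf1

As far as I can tell, neither step accepts an OSD id, which is why I am asking.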
Thank you.
Jin.