Thanks - I didn't realize that was such a recent fix.
I've now tried 12.2.8, but perhaps I'm not clear on what I should have
done to the OSD I'm replacing, since I'm getting the error "The osd
ID 747 is already in use or does not exist." The case is clearly the
latter, since I completely removed the old OSD (osd crush remove,
auth del, osd rm, wipe disk). Should I have done something differently
(i.e. not removed the OSD completely)?
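For reference, the exact sequence I ran to remove the old OSD was roughly
the following (the wipe command is just an illustration of how I cleaned
the disk; the device name is a placeholder):

    # take the OSD out of the CRUSH map, remove its cephx key,
    # and delete it from the cluster
    ceph osd crush remove osd.747
    ceph auth del osd.747
    ceph osd rm 747
    # then wipe the old disk, e.g. something like:
    wipefs -a /dev/sdX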
Searching the docs, I see a command 'ceph osd destroy'. What does that
do compared to my removal procedure (osd crush remove, auth del, osd rm)?
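If I'm reading the docs right, the intended replacement flow might look
something like the sketch below instead, keeping the ID reserved rather
than deleting it outright (just my guess from the documentation, please
correct me if I have this wrong):

    # mark the failed OSD as destroyed, keeping its ID and CRUSH position
    ceph osd destroy 747 --yes-i-really-mean-it
    # then rebuild it on the replacement disk, reusing the same ID
    ceph-volume lvm prepare --bluestore --osd-id 747 \
        --data H901D44/H901D44 --block.db /dev/disk/by-partlabel/H901J44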
Thanks,
Andras
On 10/3/18 10:36 AM, Alfredo Deza wrote:
On Wed, Oct 3, 2018 at 9:57 AM Andras Pataki
<apataki@xxxxxxxxxxxxxxxxxxxxx> wrote:
After replacing a failing drive, I'd like to recreate the OSD with the same
osd-id using ceph-volume (now that we've moved from ceph-disk to
ceph-volume). However, I don't seem to be successful. The command I'm using:
ceph-volume lvm prepare --bluestore --osd-id 747 --data H901D44/H901D44
--block.db /dev/disk/by-partlabel/H901J44
But it created an OSD with ID 601, which was the lowest ID it could allocate,
and apparently ignored the 747. This is with ceph 12.2.7. Any ideas?
Yeah, this was a problem that was fixed and released as part of 12.2.8
The tracker issue is: http://tracker.ceph.com/issues/24044
The Luminous PR is https://github.com/ceph/ceph/pull/23102
Sorry for the trouble!
Andras
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com