Re: ceph-volume: recreate OSD with same ID after drive replacement

That's strange, I recall only deleting the OSD from the crushmap, auth del, then osd rm..


On Wed, Oct 3, 2018 at 2:54 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
On Wed, Oct 3, 2018 at 3:52 PM Andras Pataki
<apataki@xxxxxxxxxxxxxxxxxxxxx> wrote:
>
> Ok, understood (for next time).
>
> But just as an update/closure to my investigation - it seems this is a
> limitation of ceph-volume (it can't create an OSD from scratch with a
> given ID), not of base ceph.  The underlying ceph command (ceph osd new)
> very happily accepts an osd-id as an extra optional argument (after the
> fsid), and creates an OSD with the given ID.  In fact, a quick change to
> ceph_volume (the create_id function in prepare.py) will make ceph-volume
> recreate the OSD with a given ID.  I'm not a ceph-volume expert, but a
> feature to create an OSD with a given ID from scratch would be nice
> (given that the underlying raw ceph commands already support it).
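>
> For example (just a sketch; the fsid below is made up, and the cephx
> secrets normally passed via '-i' are omitted), the raw command takes
> the ID directly:
>
>      ceph osd new 170c126d-5b44-4d0f-a7b3-0b6d0bb1a2c3 747
>
> and registers osd.747 instead of allocating the lowest free ID.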

That is something that I wasn't aware of, thanks for bringing it up.
I've created an issue on the tracker to accommodate that behavior:

http://tracker.ceph.com/issues/36307

>
> Andras
>
> On 10/3/18 11:41 AM, Alfredo Deza wrote:
> > On Wed, Oct 3, 2018 at 11:23 AM Andras Pataki
> > <apataki@xxxxxxxxxxxxxxxxxxxxx> wrote:
> >> Thanks - I didn't realize that was such a recent fix.
> >>
> >> I've now tried 12.2.8, and perhaps I'm not clear on what I should have
> >> done to the OSD that I'm replacing, since I'm getting the error "The osd
> >> ID 747 is already in use or does not exist.".  The case is clearly the
> >> latter, since I've completely removed the old OSD (osd crush remove,
> >> auth del, osd rm, wipe disk).  Should I have done something different
> >> (i.e. not remove the OSD completely)?
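> >>
> >> (By "completely removed" I mean roughly this sequence, with 747
> >> standing in for the replaced OSD's ID:
> >>
> >>      ceph osd crush remove osd.747
> >>      ceph auth del osd.747
> >>      ceph osd rm 747
> >>
> >> followed by wiping the disk.)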
> > Yeah, you completely removed it, so now it can't be re-used. This is
> > the proper way if you want to re-use the ID:
> >
> > http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#rados-replacing-an-osd
> >
> > Basically:
> >
> >      ceph osd destroy {id} --yes-i-really-mean-it
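> >
> > Then, once the replacement drive is in place, the OSD can be re-created
> > re-using that ID with something like (sketch only; the device paths
> > are placeholders):
> >
> >      ceph-volume lvm prepare --bluestore --osd-id {id} --data /dev/sdX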
> >
> >> Searching the docs I see a command 'ceph osd destroy'.  What does that
> >> do (compared to my removal procedure, osd crush remove, auth del, osd rm)?
> >>
> >> Thanks,
> >>
> >> Andras
> >>
> >>
> >> On 10/3/18 10:36 AM, Alfredo Deza wrote:
> >>> On Wed, Oct 3, 2018 at 9:57 AM Andras Pataki
> >>> <apataki@xxxxxxxxxxxxxxxxxxxxx> wrote:
> >>>> After replacing a failing drive I'd like to recreate the OSD with the
> >>>> same osd-id using ceph-volume (now that we've moved to ceph-volume from
> >>>> ceph-disk).  However, I haven't been successful.  The command I'm using:
> >>>>
> >>>> ceph-volume lvm prepare --bluestore --osd-id 747 --data H901D44/H901D44
> >>>> --block.db /dev/disk/by-partlabel/H901J44
> >>>>
> >>>> But it created an OSD with ID 601, which was apparently the lowest ID
> >>>> it could allocate, ignoring the requested 747.  This is with ceph
> >>>> 12.2.7.  Any ideas?
> >>> Yeah, this was a problem that was fixed and released as part of 12.2.8
> >>>
> >>> The tracker issue is: http://tracker.ceph.com/issues/24044
> >>>
> >>> The Luminous PR is https://github.com/ceph/ceph/pull/23102
> >>>
> >>> Sorry for the trouble!
> >>>> Andras
> >>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
