Re: ceph-volume failed after replacing disk


 



Thanks for all your help.

 

I was just curious whether I could re-use the same ID after the disk crash, since the manual seems to say that is possible. It’s totally okay to use another ID. :)

In the end I recreated the OSD without specifying an OSD ID, and it took ID 71 again.
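
In case it helps anyone else, the final create step without --osd-id looked something like this (LV paths as in my original mail):

# ceph-volume lvm create --bluestore --data /dev/data/lv01 --block.db /dev/db/lv01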

 

Thanks again.

Best Rgds,

/st Wong

 

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of Erik McCormick
Sent: Friday, July 5, 2019 9:41 PM
To: Paul Emmerich <paul.emmerich@xxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] ceph-volume failed after replacing disk

 

If you create the OSD without specifying an ID, it will grab the lowest available one. Unless you have other gaps somewhere, that ID would probably be the one you just removed.
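
For example, you can sanity-check which IDs are currently allocated with:

# ceph osd ls

Assuming 71 was the only ID freed up, the listing should jump from 70 to 72, and the next OSD created without --osd-id should come back as 71.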

 

-Erik

 

On Fri, Jul 5, 2019, 9:19 AM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:

 

On Fri, Jul 5, 2019 at 2:17 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:

On Fri, Jul 5, 2019 at 6:23 AM ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
>
>
> I intended to just run destroy and re-use the ID as stated in the manual, but that doesn't seem to work.
>
> It seems I’m unable to re-use the ID?

The OSD replacement guide does not mention anything about crush and
auth commands. I believe you are now in a situation where the ID can no
longer be re-used, and ceph-volume will not create it for you when you
specify it on the CLI.

I don't know why there is so much attachment to these ID numbers. Why
is it desirable to have that 71 number back again?

 

It avoids unnecessary rebalances.
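
The flow that keeps the ID (and with it the OSD's entry in the CRUSH map, so nothing needs to move) is roughly the one from the replacement docs, e.g. with the paths from the original mail:

# ceph osd destroy 71 --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --osd-id 71 --block.db /dev/db/lv01

but only as long as the crush and auth entries for osd.71 haven't been removed in between.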

 

>
>
>
> Thanks.
>
> /stwong
>
>
>
>
>
> From: Paul Emmerich <paul.emmerich@xxxxxxxx>
> Sent: Friday, July 5, 2019 5:54 PM
> To: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>
> Cc: Eugen Block <eblock@xxxxxx>; ceph-users@xxxxxxxxxxxxxx
> Subject: Re: [ceph-users] ceph-volume failed after replacing disk
>
>
>
>
>
> On Fri, Jul 5, 2019 at 11:25 AM ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> Yes, I ran those commands before:
>
> # ceph osd crush remove osd.71
> device 'osd.71' does not appear in the crush map
> # ceph auth del osd.71
> entity osd.71 does not exist
>
>
>
> which is probably the reason why you couldn't recycle the OSD ID.
>
>
>
> Either run just destroy and re-use the ID, or run purge and don't re-use the ID.
>
> Manually deleting auth and crush entries is no longer needed since purge was introduced.
>
>
>
>
>
> Paul
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
>
>
> Thanks.
> /stwong
>
> -----Original Message-----
> From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of Eugen Block
> Sent: Friday, July 5, 2019 4:54 PM
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: [ceph-users] ceph-volume failed after replacing disk
>
> Hi,
>
> did you also remove that OSD from crush and from auth before recreating it?
>
> ceph osd crush remove osd.71
> ceph auth del osd.71
>
> Regards,
> Eugen
>
>
> Quoting "ST Wong (ITSC)" <ST@xxxxxxxxxxxxxxxx>:
>
> > Hi all,
> >
> > We replaced a faulty disk in one of our N OSDs and tried to follow the
> > steps in "Replacing an OSD" in
> > http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-osds/,
> > but got an error:
> >
> > # ceph osd destroy 71 --yes-i-really-mean-it
> > # ceph-volume lvm create --bluestore --data /dev/data/lv01 --osd-id 71 --block.db /dev/db/lv01
> > Running command: /bin/ceph-authtool --gen-print-key
> > Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
> > -->  RuntimeError: The osd ID 71 is already in use or does not exist.
> >
> > ceph -s still shows N OSDs. I then removed it with "ceph osd rm 71".
> > Now "ceph -s" shows N-1 OSDs and ID 71 no longer appears in "ceph osd ls".
> >
> > However, repeating the ceph-volume command still gives the same error.
> > We're running Ceph 14.2.1. I must have missed some steps. Would
> > anyone please help? Thanks a lot.
> >
> > Rgds,
> > /stwong
>
>
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
