Re: Is it possible to assign osd id numbers?

Now that’s a *very* different question from numbers assigned during an install.

With recent releases, instead of going through the full removal litany listed below, you can down/out the OSD and `destroy` it.  That preserves the CRUSH bucket entry and the OSD ID; then, when you use ceph-disk, ceph-volume, what-have-you to deploy a replacement, you can specify that same OSD ID on the command line.
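
For example, roughly (osd.7 and /dev/sdk are placeholders here, and ceph-volume is just one of the tools that can do this):

    # take the failed OSD out and destroy it, which keeps its CRUSH entry and OSD ID
    ceph osd out osd.7
    systemctl stop ceph-osd@7                        # if the daemon is still running
    ceph osd destroy osd.7 --yes-i-really-mean-it

    # redeploy on the replacement drive, reusing the same OSD ID
    ceph-volume lvm create --osd-id 7 --data /dev/sdk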

Note that as of 12.2.2, you’ll want to record and re-set any override reweight (manual or from reweight-by-utilization), as that rarely if ever survives the process.  Also note that, again as of that release, if the replacement drive is a different size, the CRUSH weight is not adjusted automatically, so you may (or may not) want to adjust it yourself.  Slight differences usually aren’t a huge deal; big differences can mean unused capacity or overloaded drives.
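
If you do end up re-setting the override reweight or adjusting the CRUSH weight, it looks something like this (osd.7 and both weights are example values only):

    # record the CRUSH weight and override reweight before replacing the drive
    ceph osd tree | grep osd.7

    # after redeploying, re-apply the override reweight if one was set
    ceph osd reweight 7 0.85

    # and adjust the CRUSH weight if the new drive is a different size
    ceph osd crush reweight osd.7 3.63869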

— Anthony

> 
> Thank you for your answer below.
> 
> I'm not looking to reuse them as much as I am trying to control what unused number is actually used.
> 
> For example, if I have 20 osds and 2 have failed...when I replace a disk in one server, I don't want it to automatically use the next lowest number for the osd assignment.
> 
> I understand what you mean about not focusing on the osd ids...but my ocd is making me ask the question.
> 
> Thanks,
> Shain
> 
> On 9/11/20, 9:45 AM, "George Shuklin" <george.shuklin@xxxxxxxxx> wrote:
> 
>    On 11/09/2020 16:11, Shain Miley wrote:
>> Hello,
>> I have been wondering for quite some time whether or not it is possible to influence the osd.id numbers that are assigned during an install.
>> 
>> I have made an attempt to keep our osds in order over the last few years, but it is a losing battle without having some control over the osd assignment.
>> 
>> I am currently using ceph-deploy to handle adding nodes to the cluster.
>> 
>    You can reuse osd numbers, but I strongly advise you not to focus on 
>    precise IDs. The reason is that some combination of server faults 
>    can swap IDs no matter what you do.
> 
>    It's a false sense of beauty to have the OSD ID match the ID in the 
>    name of the server.
> 
>    How do you reuse OSD numbers?
> 
>    An OSD number is used (and should be cleaned up if the OSD dies) in 
>    three places in Ceph:
> 
>    1) CRUSH map: ceph osd crush rm osd.x
> 
>    2) osd list: ceph osd rm osd.x
> 
>    3) auth: ceph auth rm osd.x
> 
>    The last one is often forgotten and is a common reason for 
>    ceph-ansible to fail on a new disk in the server.
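> 
>    In other words, for a dead osd.x the cleanup is just those three 
>    commands run in order, and afterwards you can check that the ID is 
>    actually free again:
> 
>    ceph osd crush rm osd.x
>    ceph osd rm osd.x
>    ceph auth rm osd.x
> 
>    ceph osd tree     # osd.x should be gone from the tree
>    ceph auth ls      # and its key should no longer be listed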
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



