Re: Influencing the osd.id when creating or replacing an osd

We are running Octopus but will be upgrading to Reef or Squid in the next few weeks.  As part of that upgrade I am planning on switching over to cephadm as well.

Part of what I am doing right now is replacing old drives and swapping out some of our oldest nodes for new ones…then I will convert the rest of the Filestore OSDs over to BlueStore so that I can upgrade.


One other question based on your suggestion below…my typical process for removing or replacing an OSD involves the following:

ceph osd crush reweight osd.<id> 0.0
ceph osd out <id>
service ceph stop osd.<id>
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm <id>



Does `ceph osd destroy` do something beyond the last three commands above, or am I just doing the same thing with multiple commands?  If I need to start issuing the destroy command as well I can.
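In other words, would the tail end of that process turn into something like the sketch below?  I'm guessing at the ceph-volume invocation since we still deploy with ceph-deploy today, and the ID and device path are just placeholders:

ceph osd destroy <id> --yes-i-really-mean-it                 # marks the OSD destroyed; its ID and CRUSH entry remain
# ...physically replace the drive...
ceph-volume lvm create --osd-id <id> --data /dev/<device>    # redeploy, reusing the same ID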



Thank you.

Shain



From: Anthony D'Atri <aad@xxxxxxxxxxxxxx>
Date: Friday, October 18, 2024 at 9:01 AM
To: Shain Miley <SMiley@xxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re:  Influencing the osd.id when creating or replacing an osd

What release are you running where ceph-deploy still works?

I get what you're saying, but really you should get used to OSD IDs being arbitrary.

    - ``ceph osd ls-tree <name>`` will output a list of OSD ids under
      the given CRUSH name (like a host or rack name).  This is useful
      for applying changes to entire subtrees.  For example, ``ceph
      osd down `ceph osd ls-tree rack1```.

This is useful for one-off scripts, where you can e.g. use it to get a list of OSDs on a given host.
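For example, something like this (hostname is a placeholder, and this assumes systemd-managed, non-cephadm OSDs) restarts every OSD on one node:

ceph osd set noout
for id in $(ceph osd ls-tree <hostname>); do systemctl restart ceph-osd@"$id"; done
ceph osd unset noout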

Normally the OSD ID selected is the lowest-numbered unused one, which can be either an ID that has never been used before or one whose OSD has been deleted.  So if you delete an OSD entirely and redeploy, you may or may not get the same ID, depending on the cluster’s history.
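You can see which IDs are currently allocated with:

ceph osd ls     # prints the numeric IDs present in the OSD map

Any gap in that list (left by a fully deleted OSD) is a candidate for reuse.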

    - ``ceph osd destroy`` will mark an OSD destroyed and remove its
      cephx and lockbox keys.  However, the OSD id and CRUSH map entry
      will remain in place, allowing the id to be reused by a
      replacement device with minimal data rebalancing.

Destroying OSDs and redeploying them can help with what you’re after.
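A destroyed-but-not-yet-replaced OSD stays visible with its ID in the CRUSH tree, e.g.:

ceph osd tree | grep destroyed     # destroyed OSDs keep their ID and show a "destroyed" status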

> On Oct 17, 2024, at 9:14 PM, Shain Miley <SMiley@xxxxxxx> wrote:
>
> Hello,
> I am still using ceph-deploy to add OSDs to my cluster.  From what I have read, ceph-deploy does not allow you to specify the osd.id when creating new OSDs; however, I am wondering if there is a way to influence the number that Ceph will assign to the next OSD that is created.
>
> I know that it really shouldn’t matter what OSD number gets assigned to a disk, but as the number of OSDs increases it is much easier to keep track of where things are if you can control the ID when replacing failed disks or adding new nodes.
>
> Thank you,
> Shain
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



