Re: Influencing the osd.id when creating or replacing an osd

Hi,

After you have reweighted the osd to 0 and waited for the rebalancing to finish, you can just stop the osd process and "purge" the osd instead of marking it out (since the data reshuffling has already happened).

The "purge" command does a couple of things at once, like removing it from the crush tree and deleting auth caps. This if from the command help output (ceph osd purge -h):

osd purge <id|osd.id> [--force] [--yes-i-really-mean-it] --> purge all osd data from the monitors including the OSD id and CRUSH position

So your procedure could be reduced to three steps if you don't need to retain the osd id:

1. ceph osd crush reweight osd.<ID> 0
2. stop osd
3. ceph osd purge <ID> [--force] [--yes-i-really-mean-it]
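
For example, on a hypothetical osd.12 (assuming a systemd-managed, non-cephadm host):

ceph osd crush reweight osd.12 0
# wait until "ceph -s" shows the rebalancing has finished
systemctl stop ceph-osd@12
ceph osd purge 12 --yes-i-really-mean-it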

You can also add one more safety switch before purging it:

ceph osd safe-to-destroy <ID>
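
If I remember correctly, "safe-to-destroy" exits non-zero when the osd is not yet safe to remove, so you could chain the two (hypothetical osd.12 again):

ceph osd safe-to-destroy 12 && ceph osd purge 12 --yes-i-really-mean-it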

The "destroy" commmand leaves the ID, auth caps and crush weight intact in case you want to replace the OSD with a drive of the same size. In that case you would not do the "crush reweight" but just stop the osd and set it "out", wait for the recovery to finish, then mark it as "destroyed" and then recreate the osd with a new drive.

Regards,
Eugen

Quoting Anthony D'Atri <aad@xxxxxxxxxxxxxx>:

On Oct 19, 2024, at 2:47 PM, Shain Miley <SMiley@xxxxxxx> wrote:

We are running octopus but will be upgrading to reef or squid in the next few weeks. As part of that upgrade I am planning on switching over to using cephadm as well.

Part of what I am doing right now is going through and replacing old drives and removing some of our oldest nodes and replacing them with new ones…then I will convert the rest of the filestore osds over to bluestore so that I can upgrade.

One other question based on your suggestion below…my typical process of removing or replacing an osd involves the following:

ceph osd crush reweight osd.id 0.0
ceph osd out osd.id
service ceph stop osd.id
ceph osd crush remove osd.id
ceph auth del osd.id
ceph osd rm id

Does `ceph osd destroy` do something other than the last 3 commands above, or am I just doing the same thing with multiple commands? If I need to start issuing the destroy command as well I can.


I don’t recall if it will stop the service if running, but it does leave the OSD in the CRUSH map marked as ‘destroyed’. I *think* it leaves the auth but I’m not sure.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



