I've done something similar. I used a process like this:
ceph osd set noout
ceph osd set nodown
ceph osd set nobackfill
ceph osd set norebalance
ceph osd set norecover
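Before touching anything it's worth confirming the flags actually took effect. A quick check (nothing here is specific to any one setup):

# the cluster-wide flags show up in the OSD map and in the health output
ceph osd dump | grep flags
ceph status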
Then I did my work to manually remove/destroy the OSDs I was replacing, brought the replacements online, and unset all of those options. Then the I/O world collapsed for a little while as the new OSDs were backfilled.
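Roughly, the middle part looked something like the following. This is only a sketch, assuming a Luminous-or-newer cluster with ceph-volume; osd.3 and /dev/sdb are placeholders for your own OSD IDs and devices:

# mark the old OSD out, stop its daemon, and destroy it (destroy keeps the ID reusable)
ceph osd out 3
systemctl stop ceph-osd@3
ceph osd destroy 3 --yes-i-really-mean-it

# recreate the OSD on the replacement disk, reusing the same ID
ceph-volume lvm create --osd-id 3 --data /dev/sdb

# once the new OSDs are up, drop the flags and let backfill run
ceph osd unset noout
ceph osd unset nodown
ceph osd unset nobackfill
ceph osd unset norebalance
ceph osd unset norecover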
Some of those might be redundant and/or unnecessary. I'm not a ceph expert. Do this at your own risk. Etc.
jonathan
On Wed, Jan 9, 2019 at 7:58 AM Mosi Thaunot <pourlesmails@xxxxxxxxx> wrote:
Hello,

I have a cluster of 3 nodes, 3 OSDs per node (so 9 OSDs in total), with replication set to 3 (so each node has a copy). For some reason, I would like to recreate node 1. What I have done:

1. out the 3 OSDs of node 1, stop them, then destroy them (almost at the same time)
2. recreate the new node 1 and add the 3 new OSDs

My problem is that after step 1, I had to wait for backfilling to complete (to get only active+clean+remapped and active+undersized+degraded PGs). Then I had to wait again in step 2 for the cluster to become healthy.

Could I avoid the wait in step 1? What should I do then? I was thinking:

- set the OSDs to noout
- out/stop/destroy the 3 OSDs of node 1 (at the same time)
- reinstall node 1 (I have a copy of all the configuration files) and add the 3 OSDs

Would that work?

Thanks and regards,
Mosi
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com