Re: OSD upgrades

On 6/2/20 5:44 AM, Brent Kennedy wrote:
> We are rebuilding servers, and before Luminous our process was:
> 
> 1. Reweight the OSD to 0
> 2. Wait for the rebalance to complete
> 3. Out the OSD
> 4. Crush remove the OSD
> 5. Auth del the OSD
> 6. ceph osd rm #
> 
> Seems the Luminous documentation says that you should:
> 
> 1. Out the OSD
> 2. Wait for the cluster rebalance to finish
> 3. Stop the OSD
> 4. ceph osd purge #
> 
> Is reweighting to 0 no longer suggested?
> 
> Side note: I tried our existing process, and even after the reweight, the
> entire cluster started rebalancing again after step 4 (crush remove osd) of
> the old process. I should also note that after reweighting to 0, when I
> tried to run "ceph osd out #", it said the OSD was already marked out.
> 
> I assume the docs are correct, but I just want to make sure, since
> reweighting had previously been recommended.

The new commands just make it simpler. There are many ways to
accomplish the same goal, but what the docs describe should work in most
scenarios.
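
For reference, here is a minimal sketch of both sequences as concrete
commands. The OSD id 12 is a hypothetical example; substitute the id of
the OSD you are actually removing:

    # Old (pre-Luminous) manual sequence, roughly as described above:
    ceph osd crush reweight osd.12 0   # drain the OSD by zeroing its CRUSH weight
    # ...wait until all PGs are active+clean again...
    ceph osd out 12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12

    # New (Luminous and later) sequence from the docs:
    ceph osd out 12
    # ...wait until all PGs are active+clean again...
    systemctl stop ceph-osd@12         # on the host carrying the OSD
    ceph osd purge 12 --yes-i-really-mean-it

"ceph osd purge" collapses the crush remove / auth del / osd rm steps
into a single command, and "ceph osd out" already stops data from being
mapped to the OSD, which is why the separate reweight step is no longer
in the docs.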

Wido

> 
> Regards,
> 
> -Brent
> 
> Existing Clusters:
> 
> Test: Nautilus 14.2.2 with 3 OSD servers, 1 mon/mgr, 1 gateway, 2 iSCSI
> gateways (all virtual on NVMe)
> 
> US Production (HDD): Nautilus 14.2.2 with 11 OSD servers, 3 mons, 4 gateways,
> 2 iSCSI gateways
> 
> UK Production (HDD): Nautilus 14.2.2 with 12 OSD servers, 3 mons, 4 gateways
> 
> US Production (SSD): Nautilus 14.2.2 with 6 OSD servers, 3 mons, 3 gateways,
> 2 iSCSI gateways
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


