Re: Proper way of removing OSDs

On 21/12/17 10:21, Konstantin Shalygin wrote:
>> Is this the correct way to remove OSDs, or am I doing something wrong?
> The generic way for maintenance (e.g. a disk replacement) is to rebalance by changing the OSD's CRUSH weight:
> 
> 
> ceph osd crush reweight osd.<id> 0
> 
> The cluster will then migrate all data off this OSD.
> 
> 
> Once the cluster is back to HEALTH_OK, you can safely remove the OSD:
> 
> ceph osd out osd.<id>
> systemctl stop ceph-osd@<id>
> ceph osd crush remove osd.<id>
> ceph auth del osd.<id>
> ceph osd rm osd.<id>
> 
> 
> 
> k
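
While the reweight drains the OSD, progress can be watched with a couple of read-only commands (a minimal sketch; osd.12 is just a hypothetical ID standing in for the OSD being removed):

ceph osd crush reweight osd.12 0   # start draining: data migrates off osd.12
ceph -s                            # watch recovery/backfill until HEALTH_OK
ceph osd df tree                   # confirm osd.12's utilisation drops toward zero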

Basically this. When you mark an OSD "out" it stops receiving data and its PGs are remapped, but it is still part of the CRUSH map and still influences the weights of its buckets - so when you do the final purge, those bucket weights shift and another rebalance occurs. Reweighting the OSD to 0 first ensures you don't incur any extra data movement when you finally remove it.
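
For what it's worth, if you're on Luminous or newer, the last three removal steps can be collapsed into a single command; a minimal sketch, assuming the OSD has already been drained and stopped as above:

ceph osd purge osd.<id> --yes-i-really-mean-it   # crush remove + auth del + osd rm in one step

The end state is the same as running the three commands individually.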

Rich


