osd crush reweight 0 on "out" OSD causes backfilling?

Hi all,

I'm in the process of decommissioning some OSDs and thought I'd
previously migrated all data off them by marking them "out" (which did
trigger a fair amount of remapping as expected).
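
For reference, the commands I used to mark them out were along these
lines (the OSD IDs here are just placeholders, mine differ):

    # mark each OSD "out" so CRUSH stops mapping data to it
    ceph osd out 12
    ceph osd out 13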

Looking at the pgmap ('ceph pg dump') confirmed that none of the "out"
OSDs was still hosting any PGs (neither in the 'up' nor the 'acting'
column).
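
In case it helps, this is roughly how I checked (placeholder OSD ID
again, and the exact 'pg dump' output format may differ between
versions):

    # grep the up/acting sets for osd.12; no output means
    # the OSD is no longer in any PG's up or acting set
    ceph pg dump pgs_brief | egrep '(\[|,)12(,|\])'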

I thought my next prudent step before taking the OSDs down and removing
them from the crushmap was to reweight them to 0.  To my surprise this
caused a flurry of remapping/backfilling.
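
What I ran was essentially this (placeholder ID; note this sets the
CRUSH weight, not the reweight/override value):

    # set the CRUSH weight of the OSD to 0
    ceph osd crush reweight osd.12 0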

Is this expected, and if so, what am I missing?  This is an old Firefly
cluster (the purpose of taking out the OSDs is to repurpose them for a
Luminous cluster we're building...).

Assuming the reweight serves no useful purpose here, would it be safe to
remove them with 'osd crush remove' followed by 'osd rm', without
reweighting them to 0 first?
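
In other words, skip the reweight and go straight to something like
this (sketch only, placeholder ID; I'd stop the OSD daemon on its host
first):

    # remove the OSD from the CRUSH map, delete its auth key,
    # and drop it from the OSD map
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12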

Your insight is much appreciated!

Cheers
Christian

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


