If you want to trigger only one rebalance when you have a failed disk being drained, set the CRUSH weight for that OSD to 0.0. Then, when you fully remove the disk from the cluster, it will not do any additional backfilling. Any change to the CRUSH map will likely move data around, even if you're removing an already "removed" OSD.
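For reference, a minimal sketch of that order of operations; osd.250 is just an example id (borrowed from the tree output below), and the service command assumes a systemd-based install:

    # Reweight first; this triggers the only rebalance.
    ceph osd crush reweight osd.250 0.0

    # After backfill completes, the remaining steps should move no data:
    ceph osd out 250
    systemctl stop ceph-osd@250    # run on the OSD's host
    ceph osd crush remove osd.250
    ceph auth del osd.250
    ceph osd rm 250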
From: M Ranga Swami Reddy [swamireddy@xxxxxxxxx]
Sent: Thursday, December 01, 2016 11:45 PM
To: David Turner
Cc: ceph-users
Subject: Re: [ceph-users] node and its OSDs down...

Hi David - Yep, I did the "ceph osd crush remove osd.<id>", which started the recovery.
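(For watching the recovery this kicks off, the standard status commands apply:

    ceph -s             # overall status, including recovery/backfill progress
    ceph health detail  # per-PG detail while recovery is running
)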
My worry is: why is Ceph doing recovery at all, if the OSD is already down and no longer in the cluster? That means Ceph should already have copies of the down OSD's objects on other OSDs. Here is the ceph osd tree output:
===
227   0.91
....
250   0.91
===
So, to avoid the recovery/rebalance, can I set the weight of the OSD (which was in the down state) to 0? But does this weight setting also lead to rebalance activity?
Thanks
Swami
On Thu, Dec 1, 2016 at 8:07 PM, David Turner <david.turner@xxxxxxxxxxxxxxxx> wrote: