Hi David - Yep, I did the "ceph osd crush remove osd.<id>", which started the recovery.
My worry is: why is Ceph doing recovery at all, if the OSD was already down and no longer in the cluster? That means Ceph had already copied the down OSD's objects to other OSDs. Here is the ceph osd tree o/p:
===
227 0.91
....
250 0.91
===
So, to avoid the recovery/rebalance, can I instead set the weight of the OSD (which was in the down state)? Or does setting the weight also lead to rebalance activity?
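For example, something like this? (Just a sketch using the standard ceph CLI; <id> is a placeholder.)
===
# Drop the dead OSD's CRUSH weight to 0 so it no longer takes part in
# placement; note that this change itself triggers data movement.
ceph osd crush reweight osd.<id> 0
===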
Thanks
Swami
On Thu, Dec 1, 2016 at 8:07 PM, David Turner <david.turner@xxxxxxxxxxxxxxxx> wrote:
I assume you also did ceph osd crush remove osd.<id>. When you removed the OSD that was down/out and already balanced off of, you changed the weight of the host it was on, which triggers additional backfilling to rebalance the CRUSH map.
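For reference, the sequence usually recommended to avoid moving the data twice looks roughly like this (a sketch only, not verified against your release; <id> is a placeholder):
===
# Set the OSD's CRUSH weight to 0 first. The host bucket's weight is the sum
# of its OSDs' weights, so this triggers the rebalance once.
ceph osd crush reweight osd.<id> 0

# Once the cluster is back to HEALTH_OK, take the (now zero-weight) OSD out
# and remove it; this step should cause little or no further movement.
ceph osd out <id>
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm <id>
===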
David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943
From: ceph-users [ceph-users-bounces@lists.ceph.com] on behalf of M Ranga Swami Reddy [swamireddy@xxxxxxxxx]
Sent: Thursday, December 01, 2016 3:03 AM
To: ceph-users
Subject: node and its OSDs down...
Hello,

One of my Ceph nodes, with 20 OSDs, went down... After a couple of hours, ceph health was back in the OK state.

Now I have tried to remove those OSDs, which were in the down state, from the Ceph cluster using "ceph osd remove osd.<id>". Then the Ceph cluster started rebalancing, which is strange, because those OSDs had been down for a long time and health was also OK. My question: why did recovery/rebalance start when I removed an OSD which was already down?

Thanks
Swami