Hello Sage,

Thanks for your fast response. I guess I did not explain the problem properly. We created a pool with 512 PGs and 25 OSDs, then brought one OSD down and out. The purpose of our experiment is to test the single-erasure case. However, due to load balancing, we observe that in some PGs, OSDs other than the one that was brought down get shuffled, and a few are even replaced with new OSDs. Because of this, the recovery operation treats some PGs as having more than one erasure, which is undesired (since only one OSD was brought down in the cluster, each PG should be affected by at most one erasure). Is there a way we can prevent this from happening?

Thanking you,
Yours sincerely,
Elita Lobo

On Tue, Feb 7, 2017 at 8:22 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> On Tue, 7 Feb 2017, Elita Lobo wrote:
>> Hi,
>>
>> Is there a way to disable load balancing in Ceph (in the multiple-PG
>> case) whenever an OSD goes down? Or can we at least add a delay before
>> load balancing happens?
>>
>> I am trying to compute the number of erasures in each PG by using the
>> command "ceph pg dump" before and after an OSD goes down. The positions
>> of some of the OSDs get changed due to load balancing, so I am unable
>> to get the correct number of erasures.
>>
>> Would be grateful if any of the Ceph developers could help me figure
>> out a way to find the number of erasures in each PG.
>
> The simplest thing to do is simply
>
>     ceph osd set noout
>
> which will prevent the rebalancing indefinitely. You can also adjust the
> 'mon osd down out interval' setting on the mon (default 10m IIRC) to be
> something longer.
>
> sage
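
For reference, a minimal sketch of the workflow Sage describes, using only
standard Ceph CLI commands; the 3600s interval value and the before/after
file names are illustrative assumptions, not taken from the thread:

    # Prevent down OSDs from being marked "out", so PG mappings are not remapped:
    ceph osd set noout

    # Alternative: lengthen the down->out grace period (default 600s = 10m);
    # the 3600s value here is only an example:
    ceph tell mon.* injectargs '--mon-osd-down-out-interval 3600'

    # Capture PG-to-OSD mappings before and after taking the OSD down, then compare:
    ceph pg dump pgs_brief > pgs_before.txt
    # ... stop the OSD under test ...
    ceph pg dump pgs_brief > pgs_after.txt
    diff pgs_before.txt pgs_after.txt

    # When the experiment is finished, clear the flag so normal recovery resumes:
    ceph osd unset noout

With noout set, each PG's acting set should differ only by the single downed
OSD, so the diff gives the per-PG erasure count directly.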