Hi Sage,

> Jewel and the latest hammer point release have an improved
> reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry
> run) to correct this.

Thank you. But we are not planning to upgrade the cluster soon. In that
case, are there any tunable options that would help, such as "ceph osd
crush tunables optimal"? Or is there any other configuration change that
would help?

Thanks
Swami

On Tue, Jun 7, 2016 at 6:00 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>> Hello,
>> I have around 100 OSDs in my ceph cluster. A few of the OSDs are filled
>> with >85% of data, while other OSDs are only ~60%-70% full.
>>
>> Is there any reason why the OSDs are filling unevenly? Do I need to make
>> any configuration tweaks to fix this? Please advise.
>>
>> PS: Ceph version is 0.80.7
>
> Jewel and the latest hammer point release have an improved
> reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry
> run) to correct this.
>
> sage
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
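For readers who, like Swami, are stuck on a pre-Jewel release: the effect of reweight-by-utilization can be approximated by hand with `ceph osd reweight <osd-id> <weight>` on the overfull OSDs. The sketch below illustrates the basic idea only; it is not Ceph's actual implementation, and the utilization figures, the `overload_ratio` default, and the function name are invented for this example. It lowers the reweight of any OSD whose utilization is well above the cluster average, so CRUSH directs proportionally less data to it:

```python
# Illustrative sketch of the reweight-by-utilization idea (not Ceph code).
# Input: per-OSD used-space fractions, e.g. from "ceph df" / "ceph pg dump".
# Output: suggested new reweight values for overloaded OSDs, to apply with
#   ceph osd reweight <osd-id> <weight>

def reweight_by_utilization(utilization, overload_ratio=1.20, current=None):
    """utilization: {osd_id: used_fraction}.
    Only OSDs above avg * overload_ratio are touched (mirroring the
    "oload" threshold concept; 1.20 here is an assumed default).
    Returns {osd_id: new_reweight}."""
    if current is None:
        # Assume all OSDs currently have the default reweight of 1.0.
        current = {osd: 1.0 for osd in utilization}
    avg = sum(utilization.values()) / len(utilization)
    changes = {}
    for osd, used in utilization.items():
        if used > avg * overload_ratio:
            # Scale the existing reweight down in proportion to how far
            # this OSD's utilization exceeds the cluster average.
            changes[osd] = max(round(current[osd] * avg / used, 2), 0.0)
    return changes

# Example: osd.3 and osd.7 are much fuller than the rest of the cluster.
util = {0: 0.60, 1: 0.62, 2: 0.63, 3: 0.88, 7: 0.86, 9: 0.61}
print(reweight_by_utilization(util))  # suggests lowering osd.3 and osd.7
```

Note that `ceph osd reweight` changes the temporary override weight (0-1), not the CRUSH weight, so it rebalances data without pretending the disk changed size; lowering it in small steps and letting recovery settle between steps is the usual practice.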