On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
> Hello,
> I have around 100 OSDs in my ceph cluster. A few OSDs are filled to
> more than 85% capacity, while others are only ~60%-70% full.
>
> Any reason why the OSDs are filling unevenly? Do I need to make any
> tweaks to the configuration to fix this? Please advise.
>
> PS: Ceph version is 0.80.7

Jewel and the latest hammer point release have an improved
reweight-by-utilization (ceph osd test-reweight-by-utilization ... to
dry run) to correct this.

sage
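
For reference, a sketch of that workflow on Jewel or the latest hammer
point release. The oload value of 120 below is only an illustrative
threshold, and the ceph osd df step is an assumption about the
available CLI, not something mentioned above:

  # show per-OSD utilization to confirm which OSDs are over-full
  ceph osd df

  # dry run: report which OSDs would be reweighted and by how much,
  # without changing anything (120 = consider OSDs above 120% of the
  # mean utilization)
  ceph osd test-reweight-by-utilization 120

  # apply the reweight once the dry-run output looks reasonable
  ceph osd reweight-by-utilization 120

Running the test command first, and reweighting in small steps, limits
how much data the cluster has to move at once.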