Hello,

I have a Ceph cluster of 6 OSDs, 146 GB each. I have copied a lot of data, filling the cluster to 87%. The data is not evenly distributed between the OSDs:

host1
  /dev/sdb  137G  119G   15G  90%  /data/osd0
  /dev/sdc  137G  126G  7.4G  95%  /data/osd1
host2
  /dev/sdc  137G  114G   21G  85%  /data/osd2
  /dev/sdd  137G  130G  3.6G  98%  /data/osd3
host3
  /dev/sdb  137G  107G   27G  81%  /data/osd4
  /dev/sdc  137G   98G   36G  74%  /data/osd5

During the copy I got an I/O error, but after restarting the cluster it seems fine. For some reason osd3 holds much more data than osd5. Is there a way to get the data distributed more evenly?
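For reference, I have been looking at OSD reweighting as a possible fix. A minimal sketch of what I am considering, assuming the standard ceph CLI is available on a monitor host (the threshold and weight values below are just example numbers I picked, not tested values):

    # show the CRUSH tree and per-OSD weights to see how the map is set up
    ceph osd tree

    # ask Ceph to lower the reweight value of OSDs whose utilization is
    # above the given percentage of the average (e.g. 110)
    ceph osd reweight-by-utilization 110

    # or manually lower the reweight of the fullest OSD (osd3 here);
    # the second argument is a value between 0.0 and 1.0
    ceph osd reweight 3 0.8

Would something like this be the right approach, or is there a better way?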