Hi Ceph,

Last weekend I wrote a script to show which hosts or OSDs are the most
overused. Then it occurred to me that it would be useful to a sysadmin
wanting to keep an eye on device usage. Some OSDs will fill up more
quickly than others, there is no way around it (see
http://tracker.ceph.com/issues/15653#detailed-explanation for the
details), but that can easily be predicted and more space added to the
cluster when necessary.

For instance:

$ pip install crush
$ ceph osd crush dump > crushmap-ceph.json
$ crush ceph --convert crushmap-ceph.json > crushmap.json
$ crush analyze --type device --rule replicated --crushmap crushmap.json
         ~id~  ~weight~  ~over/under used~
~name~
osd.35     35  2.299988          10.400604
osd.2       2  1.500000          10.126750
osd.47     47  2.500000           5.543335
osd.46     46  1.500000           2.956655
osd.29     29  1.784988           2.506855
...

shows that more disks are needed when osd.35 reaches the threshold (80%
full for instance).

There is a blog post with more details at http://dachary.org/?p=3980

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre
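
P.S.: for anyone who wants to automate that check, below is a minimal
sketch (shell + awk) that re-runs the pipeline above and warns about
devices expected to go over a threshold. It assumes the column layout of
the crush analyze output shown above; AVG_USE (the current average
utilization of the cluster, in percent) and THRESHOLD are placeholder
variables I made up for the example, and the projection is just a linear
scaling of the average by the over/under used percentage, not something
the crush tool computes itself.

#!/bin/sh
# Sketch: warn about devices predicted to exceed a usage threshold.
# AVG_USE is the assumed average cluster utilization (in percent),
# THRESHOLD the level at which to warn; adapt both to your cluster.
AVG_USE=${AVG_USE:-70}
THRESHOLD=${THRESHOLD:-80}

ceph osd crush dump > crushmap-ceph.json
crush ceph --convert crushmap-ceph.json > crushmap.json
crush analyze --type device --rule replicated --crushmap crushmap.json |
awk -v avg="$AVG_USE" -v max="$THRESHOLD" '
  # data lines look like: osd.35  35  2.299988  10.400604
  $1 ~ /^osd\./ {
    projected = avg * (1 + $4 / 100)  # scale the average by the over/under used %
    if (projected >= max)
      printf "%s expected around %.1f%% full (threshold %s%%)\n", $1, projected, max
  }'

Run from cron, something along those lines would give early warning that
more disks are needed before the most overused device actually fills up.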