Hi all,
We are still fairly new to running Ceph and have been operating an RGW cluster for about six months now. It mainly holds large DB backups (write once, read once, delete after N days). The cluster is now warning us about an OSD that is near_full, so we went to look at the usage across OSDs. We were somewhat surprised at how imbalanced it is: the least-used OSD is at 22% full, the fullest is at nearly 90%, and usage climbs almost linearly across the OSDs (though it looks to step in roughly 5% increments):
[root@carf-ceph-osd01 ~]# ceph osd df | sort -nk8
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
77 hdd 7.27730 1.00000 7451G 1718G 5733G 23.06 0.43 32
73 hdd 7.27730 1.00000 7451G 1719G 5732G 23.08 0.43 31
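
(The listing above is trimmed to the two emptiest OSDs. To look at the other end of the spread and the cluster-wide summary we have just been using plain shell follow-ups like the ones below; %USE is column 8 and the tail counts are arbitrary.)

[root@carf-ceph-osd01 ~]# ceph osd df | sort -nk8 | tail -5
[root@carf-ceph-osd01 ~]# ceph osd df | tail -2

The first shows the five fullest OSDs (the ones driving the near_full warning), the second the TOTAL and MIN/MAX VAR / STDDEV summary lines that ceph osd df prints at the bottom.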
We came across reweight-by-utilization and were thinking of starting with the dry-run variant:

ceph osd test-reweight-by-utilization

Is this the right way to go about rebalancing, or should we be approaching this differently?
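
For reference, this is how we understand the command would be invoked (the numeric arguments are just example values, not something we have tuned for this cluster; the first form is the dry run, the second actually applies the changes):

[root@carf-ceph-osd01 ~]# ceph osd test-reweight-by-utilization 120 0.05 10
[root@carf-ceph-osd01 ~]# ceph osd reweight-by-utilization 120 0.05 10

i.e., as far as we can tell, only OSDs above 120% of the mean utilization get touched, each override weight changes by at most 0.05, and at most 10 OSDs are adjusted per pass.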