I was curious whether anyone has filled Ceph storage beyond 75%. Admittedly we lost a single host to a power failure and will be down that host until the replacement parts arrive, but even setting that aside I am seeing a large disparity between the most and least full OSDs::
--
ID  WEIGHT   REWEIGHT SIZE  USE   AVAIL %USE  VAR
559 4.54955  1.00000  3724G 2327G 1396G 62.50 0.84
193 2.48537  1.00000  3724G 3406G  317G 91.47 1.23
    TOTAL             2178T 1625T  552T 74.63
MIN/MAX VAR: 0/1.26  STDDEV: 7.12
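For reference, that excerpt is hand-trimmed from ceph osd df output; a rough way to see the whole spread is to sort on the %USE column (column 7, assuming GNU sort), with the caveat that the header and summary lines float to one end::
# ceph osd df | sort -nk7    # emptiest OSDs sort toward the top, fullest to the bottom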
The CRUSH weights are really off right now, but even with a default CRUSH map I am seeing a similar spread::
# osdmaptool --test-map-pgs --pool 1 /tmp/osdmap
avg 82 stddev 10.54 (0.128537x) (expected 9.05095 0.110377x)
min osd.336 55
max osd.54 115
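In case anyone wants to repeat that test, the map can be pulled straight from the cluster, and ceph osd dump shows which pool id is which::
# ceph osd getmap -o /tmp/osdmap
# ceph osd dump | grep '^pool'     # lists pool ids and names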
That's with a default weight of 3.000 across all OSDs. For what it's worth, the "expected" stddev osdmaptool prints is roughly sqrt(82) ~ 9.05, i.e. the spread a purely random PG placement would give, so our PG distribution is only a little worse than random. I was wondering if anyone can give me tips on how to get closer to 80% full.
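For context, the only knobs I'm aware of for evening this out are the temporary reweight (the REWEIGHT column above), the CRUSH weight itself, and reweight-by-utilization. The values below are purely illustrative, not something we are running::
# ceph osd reweight 193 0.85              # temporary 0.0-1.0 override on the fullest OSD above
# ceph osd crush reweight osd.193 2.0     # permanent CRUSH weight change
# ceph osd reweight-by-utilization 120    # reweight OSDs more than 20% above mean utilization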
We have 630 OSDs (down one host right now, but it will be back in a week or so) spread across 3 racks of 7 hosts (30 OSDs each). Our data replication scheme is by rack, and we only use S3, so 98% of our data is in the .rgw.buckets pool. We are on Hammer (0.94.7) and using the hammer tunables.
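For completeness, the per-pool numbers are easy to double-check; nearly all of the data sits in .rgw.buckets::
# ceph df                                  # per-pool usage
# ceph osd pool get .rgw.buckets pg_num    # PG count on the big pool
# ceph osd pool get .rgw.buckets pgp_num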
- Sean