Hopefully someone can sanity check me here, but I'm getting the feeling that MAX AVAIL in ceph df isn't reporting the correct value on 14.2.8 (mon/mgr/mds are on .8, most OSDs are still on .7).
Specifically, any of my hybrid pools (20, 29) or all-SSD pools (16, 34). For the hybrid pools, I have a CRUSH rule of take 1 of host in the ssd root, then take -1 of chassis in the hdd root. For the SSD pools, I have a CRUSH rule of take 0 of host in the ssd root. Now, I have 60 SSD OSDs of 1.92T each, and sadly the distribution is imperfect (leaving those issues out of this): I have plenty of underfull and overfull OSDs, which I am trying to manually reweight to bring the most-full ones down and free up space:
[SNIP]
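(For what it's worth, here's roughly how I'm tallying the free space from that output -- a minimal Python sketch, with the device_class and kb_avail field names assumed from the Nautilus "ceph osd df --format json" output, so adjust if those are off:)

    #!/usr/bin/env python3
    # Rough tally of raw free space across the SSD OSDs, feeding the napkin
    # math below. Field names (device_class, kb_avail) are assumed from the
    # Nautilus `ceph osd df --format json` output and may need tweaking.
    import json
    import subprocess

    raw = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    nodes = json.loads(raw)["nodes"]

    ssd = [n for n in nodes if n.get("device_class") == "ssd"]
    free_tib = sum(n["kb_avail"] for n in ssd) / 1024 ** 3  # KiB -> TiB

    print(f"{len(ssd)} SSD OSDs, ~{free_tib:.1f} TiB raw free")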
That said, as a straw-man argument: ~380 GiB free times 60 OSDs should be ~22.8 TiB free if all OSDs grew evenly (which they won't), and that is still well short of the 37 TiB raw free shown, as expected. What doesn't track is the 5.6 TiB MAX AVAIL at the pool level, even for a 3x replicated pool: 5.6 * 3 = 16.8 TiB, which is about 26% short of my napkin-math 22.8 TiB (equivalently, 22.8 / 3 = 7.6 TiB expected versus the 5.6 TiB reported). What tracks even less is the hybrid pools, which only put one copy on SSD and so consume a third of what the 3x-replicated data does there; if my napkin math is right, they should show ~22.8 TiB free (spelled out in the PS below).

Am I grossly misunderstanding how this is calculated? Maybe this is fixed in Octopus? Just trying to get a grasp on why what I'm seeing doesn't match expectations.

Thanks,
Reed
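PS -- the napkin math spelled out, using the same rough figures and the same loose GiB/TiB rounding as above:

    # Napkin math from the figures above; assumes every OSD grows evenly,
    # which (as noted) they won't.
    osd_count = 60
    free_per_osd_tib = 0.38                        # ~380 GiB, rounded loosely as above

    raw_free_tib = osd_count * free_per_osd_tib    # ~22.8 TiB raw free on SSD
    max_avail_3x = raw_free_tib / 3                # ~7.6 TiB expected; 5.6 reported
    max_avail_hybrid = raw_free_tib                # only 1 of 3 copies lands on SSD

    print(f"raw free on ssd:        ~{raw_free_tib:.1f} TiB")
    print(f"3x pool MAX AVAIL:      ~{max_avail_3x:.1f} TiB (ceph df says 5.6)")
    print(f"hybrid pool MAX AVAIL:  ~{max_avail_hybrid:.1f} TiB")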