Re: ceph-df free discrepancy


On Sat, Apr 11, 2020 at 12:43 AM Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
> That said, as a straw man argument, ~380GiB free, times 60 OSDs, should be ~22.8TiB free, if all OSDs grew evenly, which they won't

Yes, that's the problem. They won't grow evenly: the fullest one will
grow faster than the others. Also, your full ratio is probably 95%, not
100%, so the cluster will be full as soon as OSD 70 takes another ~360 GB
of data. The others won't each take another 360 GB; they'll take less
because of the bad balancing. For example, OSD 28 will only get around
233 GB of data by the time OSD 70 has taken 360 GB.
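This can be sketched numerically. All capacities, usage figures, and CRUSH shares below are made-up illustrations chosen to echo the ~360 GB / ~233 GB example (they are not this cluster's real values); the point is only that MAX AVAIL is bounded by the most-constrained OSD, scaled by its share of the pool and divided by the replication factor, which is roughly the calculation ceph df performs:

```python
# Rough sketch of how "ceph df" derives a pool's MAX AVAIL.
# All numbers here are hypothetical, not this cluster's real figures.
full_ratio = 0.95  # writes stop once any OSD reaches 95% usage

# name -> (capacity_gib, used_gib, crush_share)
# crush_share is the fraction of the pool's data CRUSH sends to that OSD;
# bad balancing means the fullest OSD also tends to have the largest share.
osds = {
    "osd.70": (1863, 1410.0, 0.020),  # the fullest OSD
    "osd.28": (1863, 1283.0, 0.013),  # an underfull OSD
}

# Headroom before each OSD hits the full ratio.
headroom = {name: full_ratio * cap - used
            for name, (cap, used, _) in osds.items()}

# The pool is writable only until its most-constrained OSD fills up:
# scale each OSD's headroom by the inverse of its CRUSH share, take the min.
raw_avail_gib = min(headroom[name] / share
                    for name, (_, _, share) in osds.items())

replication = 3
max_avail_gib = raw_avail_gib / replication

# By the time osd.70 absorbs its remaining ~360 GB, osd.28 has only taken
# a proportional ~234 GB -- the uneven growth described above.
growth_28 = headroom["osd.70"] * osds["osd.28"][2] / osds["osd.70"][2]
```

With these invented shares the sketch lands near 5.9 TiB for a 3x pool, which is the kind of gap being asked about: MAX AVAIL tracks the fullest OSD, not the average free space across OSDs.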



Paul

> , which is still far short of 37TiB raw free, as expected.
> However, what doesn't track is the 5.6TiB available at the pools level, even for a 3x replicated pool (5.6*3 = 16.8TiB, which is about 26% less than my napkin math of 22.8TiB raw, i.e. 22.8/3 = 7.6TiB per pool).
> But what tracks even less are the hybrid pools, which consume 1/3 of the raw space that the 3x-replicated data does.
> Meaning, if my napkin math is right, they should show ~22.8TiB free.
>
> Am I grossly mis-understanding how this is calculated?
> Maybe this is fixed in Octopus?
>
> Just trying to get a grasp on what I'm seeing not matching expectations.
>
> Thanks,
>
> Reed
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx