Re: ceph-df free discrepancy

That definitely makes sense.

However, shouldn't a hybrid pool have 3x the available space of an all-SSD pool?
There is plenty of rust behind it that won't impede its ability to satisfy all 3 replicas.

For example, let's say I write 5.6TiB (the current MAX AVAIL):
To a hybrid pool, that's 5.6TiB written to SSD OSDs and 11.2TiB to HDD OSDs.
To an all-SSD pool, that's 16.8TiB written to SSD OSDs.
Those are obviously vastly different amounts landing on the SSD OSDs, so the MAX AVAIL feels misleading, at least for these hybrid pools, which I imagine are admittedly less common in Ceph deployments.
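
To put that arithmetic into a quick sketch (Python, just to illustrate the replica math; the 1-SSD + 2-HDD split is how my hybrid rule lays out copies):

    # Rough sketch: raw TiB written to each device class for a 3x pool,
    # given a logical write and how many copies land on each class.
    def raw_written(logical_tib, ssd_copies, hdd_copies):
        return logical_tib * ssd_copies, logical_tib * hdd_copies

    print(raw_written(5.6, 1, 2))  # hybrid pool : 5.6 TiB to SSD, 11.2 TiB to HDD
    print(raw_written(5.6, 3, 0))  # all-SSD pool: 16.8 TiB to SSD, 0 to HDD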

But it's worth noting that I'm just pointing my crush rules at crush roots, not actually using the device class, although it is set properly.
And I imagine that anyone with similarly specific crush rules directing PG distribution could see weird (possibly misleading) results like this.
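
For clarity, the hybrid rules are roughly this shape (root names and the rule id here are illustrative, not copied from my actual crushmap):

    rule hybrid_example {
        id 10
        type replicated
        min_size 1
        max_size 10
        step take ssd-root
        step chooseleaf firstn 1 type host
        step emit
        step take hdd-root
        step chooseleaf firstn -1 type host
        step emit
    }

i.e. the first copy comes from the SSD root and the remaining copies come from the HDD root.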

Reed

> On Apr 10, 2020, at 5:55 PM, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> 
> On Sat, Apr 11, 2020 at 12:43 AM Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
>> That said, as a straw man argument, ~380GiB free, times 60 OSDs, should be ~22.8TiB free, if all OSDs grew evenly, which they won't
> 
> Yes, that's the problem. They won't grow evenly. The fullest one will
> grow faster than the others. Also, your full-ratio is probably 95% not
> 100%.
> So it'll be full as soon as OSD 70 takes another ~360 GB of data. But
> the others won't each take another 360 GB; they'll take less because
> of the bad balancing. For example, OSD 28 will only get around 233 GB
> of data by the time OSD 70 has taken 360 GB.
> 
> 
> 
> Paul
> 
>> , which is still far short of 37TiB raw free, as expected.
>> However, what doesn't track is the 5.6TiB available at the pool level, even for a 3x replicated pool (5.6*3 = 16.8TiB, which is roughly 26% less than my napkin math of 22.8/3 = 7.6TiB per pool).
>> But what tracks even less are the hybrid pools, which only consume 1/3 of the SSD space that the 3x-replicated data does.
>> Meaning, if my napkin math is right, they should show ~22.8TiB free.
>> 
>> Am I grossly misunderstanding how this is calculated?
>> Maybe this is fixed in Octopus?
>> 
>> Just trying to get a grasp on why what I'm seeing doesn't match my expectations.
>> 
>> Thanks,
>> 
>> Reed
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
