Re: MAX AVAIL capacity mismatch || mimic(13.2)

On Wed, 15 Dec 2021 at 07:45, Md. Hejbul Tawhid MUNNA
<munnaeebd@xxxxxxxxx> wrote:
> Hi,
> We are observing that the MAX AVAIL capacity does not reflect the full
> size of the cluster.

MAX AVAIL depends on several factors. One is that the OSD with the
least free space is the one used for calculating it: purely by chance,
all writes to a pool could end up on that single OSD, so the "promise"
is given for the worst case. Normally this will not happen, but it
could.
Secondly, MAX AVAIL is per pool and based on that pool's replication
factor. A size=2 pool will show more avail than a size=5 pool, because
writes to the size=5 pool obviously eat 5x the written data while the
size=2 pool only uses twice the amount.
Given the total "197 TiB avail" and a guess that replication size is
set to 3, MAX AVAIL should end up close to 197/3 (roughly 65 TiB, not
the 39 TiB reported), but since OSD 18 has some 12% more data on it
than OSD 9, the math probably computes the per-pool MAX AVAIL as if
all OSDs were as full as OSD 18, whereas TOTAL AVAIL will of course
count the free space on OSD 9 too.
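A rough back-of-the-envelope sketch in Python (not Ceph's actual
formula, just the idea; the 197 TiB is from the ceph df output quoted
below, while size=3 and the fullest-OSD free-space fraction are
guessed numbers):

# Sketch of why the per-pool MAX AVAIL can be well below AVAIL/size.
total_avail_tib = 197.0   # TOTAL AVAIL from "ceph df"
replica_size = 3          # guessed replication size

# Naive estimate: divide the raw free space by the replication factor.
print(total_avail_tib / replica_size)                 # ~65.7 TiB

# Worst-case estimate: pretend every OSD is as full as the fullest one.
# If the fullest OSD only has ~60% of the average free space left
# (hypothetical figure), the pool is only "promised" that fraction.
fullest_osd_free_fraction = 0.60
print(total_avail_tib * fullest_osd_free_fraction / replica_size)  # ~39 TiB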

> # ceph df
> GLOBAL:
>     SIZE        AVAIL       RAW USED     %RAW USED
>     272 TiB     197 TiB       75 TiB         27.68
> POOLS:
>     NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
>     images      16     243 GiB      0.60        39 TiB       31955
>     volumes     17      22 TiB     36.34        39 TiB     5951619


ID CLASS  WEIGHT REWEIGHT    SIZE     USE   AVAIL  %USE  VAR PGS
37   hdd 5.57100  1.00000 5.6 TiB 2.1 TiB 3.5 TiB 37.92 1.37 796
38   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.8 TiB 31.03 1.12 841
39   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.51 1.21 830

It's somewhat interesting that every OSD you listed has a variance
above 1, meaning all of them report more data than the cluster
average. Was this not the complete picture of your OSDs?
If there are more OSDs (SSDs, NVMes), they will of course still count
towards TOTAL AVAIL in the "ceph df" output, even if not all pools can
use them due to their CRUSH rules.
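As a made-up illustration in Python (only OSDs 37-39 and their %USE
are from your output; the SSD OSD is hypothetical), this is roughly
how VAR ends up above 1 for every hdd OSD when something else drags
the average down:

osds = {
    # id: (device class, %USE) -- 37-39 from "ceph osd df", 40 is invented
    37: ("hdd", 37.92),
    38: ("hdd", 31.03),
    39: ("hdd", 33.51),
    40: ("ssd", 10.00),
}

avg_use = sum(use for _, use in osds.values()) / len(osds)
for osd_id, (dev_class, use) in osds.items():
    # VAR is roughly this OSD's utilisation relative to the cluster-wide
    # average, so hdds sitting above a low overall average all show
    # VAR > 1, while the ssd's free space still counts in TOTAL AVAIL.
    print(osd_id, dev_class, round(use / avg_use, 2))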

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


