Re: MAX AVAIL capacity mismatch || mimic(13.2)

Hi,

We have 40 HDD OSDs in total: 40 x 5.5 TB = 220 TB. We use 3 replicas for every
pool, so I would expect "MAX AVAIL" to show 220/3 = 73.3 TB. Am I right?
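
Trying to follow your explanation, here is my rough back-of-envelope check
(just a sketch, assuming the per-pool MAX AVAIL is projected as if every HDD
OSD were as full as the fullest one, osd.18 below at 42.29%, and that we are
still on the default full ratio of 0.95, which we have not changed):

    # Rough MAX AVAIL estimate for the size=3 HDD pools.
    # Assumptions: full_ratio = 0.95 (Ceph default, unchanged here) and a
    # worst-case projection based on the fullest HDD OSD (osd.18, 42.29% used).
    hdd_osds   = 40
    osd_size   = 5.571                                # TiB per HDD OSD
    worst_use  = 0.4229                               # osd.18, fullest HDD
    full_ratio = 0.95

    raw_hdd   = hdd_osds * osd_size                   # ~222.8 TiB raw
    projected = raw_hdd * (full_ratio - worst_use)    # worst-case usable raw
    print(projected / 3)                              # ~39.2 TiB per size=3 pool

That lands very close to the 39 TiB shown below, so maybe the number is
expected after all and it comes from the imbalance plus the full ratio rather
than a misconfiguration? Please correct me if those assumptions are wrong.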

Also, what is the meaning of "variance 1.x"? I suspect we have a wrong
configuration somewhere, but I need to find it.
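
If I understand the column correctly, VAR looks like each OSD's %USE divided
by the cluster-wide average %USE (27.99 here), e.g. for osd.0: 37.74 / 27.99 ≈
1.35, which matches the table. Every HDD being above 1.0 would then simply
mean that the nearly empty SSDs pull the average down. Is that right?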

We also have some SSD OSDs. Yes, the total capacity is calculated from HDD +
SSD combined, but the per-pool MAX AVAIL should still differ between the HDD
pools and the SSD pool.


# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
 0   hdd 5.57100  1.00000 5.6 TiB 2.1 TiB 3.5 TiB 37.74 1.35 871
 1   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 34.25 1.22 840
 2   hdd 5.57100  1.00000 5.6 TiB 1.8 TiB 3.8 TiB 31.53 1.13 831
 3   hdd 5.57100  1.00000 5.6 TiB 2.2 TiB 3.4 TiB 38.80 1.39 888
 4   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.22 1.19 866
 5   hdd 5.57100  1.00000 5.6 TiB 2.0 TiB 3.6 TiB 36.12 1.29 837
 6   hdd 5.57100  1.00000 5.6 TiB 1.8 TiB 3.8 TiB 32.12 1.15 858
 7   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.9 TiB 29.63 1.06 851
 8   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.57 1.20 799
 9   hdd 5.57100  1.00000 5.6 TiB 1.6 TiB 4.0 TiB 28.73 1.03 793
10   hdd 5.57100  1.00000 5.6 TiB 1.6 TiB 3.9 TiB 29.51 1.05 839
11   hdd 5.57100  1.00000 5.6 TiB 2.0 TiB 3.6 TiB 36.19 1.29 860
12   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.61 1.20 904
13   hdd 5.57100  1.00000 5.6 TiB 1.8 TiB 3.8 TiB 32.52 1.16 807
14   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 34.17 1.22 845
15   hdd 5.57100  1.00000 5.6 TiB 2.1 TiB 3.5 TiB 37.61 1.34 836
16   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.8 TiB 31.12 1.11 881
17   hdd 5.57100  1.00000 5.6 TiB 1.8 TiB 3.8 TiB 32.66 1.17 876
18   hdd 5.57100  1.00000 5.6 TiB 2.4 TiB 3.2 TiB 42.29 1.51 860
19   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.9 TiB 29.93 1.07 828
20   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.6 TiB 34.65 1.24 854
21   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.62 1.20 845
22   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.21 1.19 797
23   hdd 5.57100  1.00000 5.6 TiB 2.0 TiB 3.5 TiB 36.75 1.31 839
24   hdd 5.57100  1.00000 5.6 TiB 2.1 TiB 3.5 TiB 36.98 1.32 829
25   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.9 TiB 30.86 1.10 878
26   hdd 5.57100  1.00000 5.6 TiB 2.0 TiB 3.5 TiB 36.68 1.31 867
27   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.8 TiB 31.13 1.11 842
28   hdd 5.57100  1.00000 5.6 TiB 1.8 TiB 3.8 TiB 32.12 1.15 821
29   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.44 1.19 871
30   hdd 5.57100  1.00000 5.6 TiB 2.0 TiB 3.6 TiB 35.97 1.29 813
31   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.9 TiB 30.60 1.09 812
32   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.6 TiB 34.65 1.24 836
33   hdd 5.57100  1.00000 5.6 TiB 1.8 TiB 3.8 TiB 31.57 1.13 884
34   hdd 5.57100  1.00000 5.6 TiB 2.0 TiB 3.5 TiB 36.67 1.31 829
35   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.6 TiB 34.79 1.24 900
36   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.76 1.21 838
37   hdd 5.57100  1.00000 5.6 TiB 2.1 TiB 3.4 TiB 38.21 1.37 796
38   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.8 TiB 31.26 1.12 841
39   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.76 1.21 830
40   ssd 1.81898  1.00000 1.8 TiB  22 GiB 1.8 TiB  1.18 0.04 112
42   ssd 1.81879  1.00000 1.8 TiB  21 GiB 1.8 TiB  1.12 0.04 107
43   ssd 1.81879  1.00000 1.8 TiB  24 GiB 1.8 TiB  1.27 0.05 121
44   ssd 1.81879  1.00000 1.8 TiB  20 GiB 1.8 TiB  1.06 0.04 101
45   ssd 1.81879  1.00000 1.8 TiB  23 GiB 1.8 TiB  1.24 0.04 116
46   ssd 1.81879  1.00000 1.8 TiB  24 GiB 1.8 TiB  1.27 0.05 120
47   ssd 1.81879  1.00000 1.8 TiB  22 GiB 1.8 TiB  1.17 0.04 110
48   ssd 1.81879  1.00000 1.8 TiB  23 GiB 1.8 TiB  1.26 0.04 120
49   ssd 1.81879  1.00000 1.8 TiB  23 GiB 1.8 TiB  1.21 0.04 117
41   ssd 1.81898  1.00000 1.8 TiB  18 GiB 1.8 TiB  0.97 0.03  94
50   ssd 1.81940  1.00000 1.8 TiB  22 GiB 1.8 TiB  1.19 0.04 115
51   ssd 1.81940  1.00000 1.8 TiB  19 GiB 1.8 TiB  1.03 0.04  98
52   ssd 1.81940  1.00000 1.8 TiB  22 GiB 1.8 TiB  1.16 0.04 109
53   ssd 1.81940  1.00000 1.8 TiB  21 GiB 1.8 TiB  1.13 0.04 105
54   ssd 1.81940  1.00000 1.8 TiB  25 GiB 1.8 TiB  1.36 0.05 128
55   ssd 1.81940  1.00000 1.8 TiB  22 GiB 1.8 TiB  1.19 0.04 113
56   ssd 1.81940  1.00000 1.8 TiB  27 GiB 1.8 TiB  1.43 0.05 140
57   ssd 1.81940  1.00000 1.8 TiB  24 GiB 1.8 TiB  1.29 0.05 122
58   ssd 1.81940  1.00000 1.8 TiB  21 GiB 1.8 TiB  1.13 0.04 107
59   ssd 1.81940  1.00000 1.8 TiB  21 GiB 1.8 TiB  1.12 0.04 111
60   ssd 1.81940  1.00000 1.8 TiB  27 GiB 1.8 TiB  1.45 0.05 137
61   ssd 1.81940  1.00000 1.8 TiB  23 GiB 1.8 TiB  1.24 0.04 117
62   ssd 1.81940  1.00000 1.8 TiB  22 GiB 1.8 TiB  1.16 0.04 112
63   ssd 1.81940  1.00000 1.8 TiB  25 GiB 1.8 TiB  1.32 0.05 126
64   ssd 1.81940  1.00000 1.8 TiB  23 GiB 1.8 TiB  1.23 0.04 115
65   ssd 1.81940  1.00000 1.8 TiB  20 GiB 1.8 TiB  1.07 0.04  99
66   ssd 1.81940  1.00000 1.8 TiB  19 GiB 1.8 TiB  1.03 0.04 100
                    TOTAL 272 TiB  76 TiB 196 TiB 27.99


# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    272 TiB     196 TiB       76 TiB         28.02
POOLS:
    NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS
    images                         16     243 GiB      0.60        39 TiB       31963
    volumes                        17      23 TiB     36.72        39 TiB     6007732
    vms                            18      82 MiB         0        39 TiB        1958
    gnocchi                        32     1.7 GiB         0        39 TiB      185082
    .rgw.root                      35     1.1 KiB         0        39 TiB           4
    default.rgw.control            36         0 B         0        39 TiB           8
    default.rgw.meta               37      37 KiB         0        39 TiB         189
    default.rgw.log                38         0 B         0        39 TiB         207
    default.rgw.buckets.index      39         0 B         0        39 TiB          66
    default.rgw.buckets.data       40     930 GiB      2.27        39 TiB      322252
    default.rgw.buckets.non-ec     49         0 B         0        39 TiB           4
    volumes-ssd                    50     191 GiB      1.21        15 TiB       49070

Regards,
Munna


On Wed, Dec 15, 2021 at 1:16 PM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

> Den ons 15 dec. 2021 kl 07:45 skrev Md. Hejbul Tawhid MUNNA
> <munnaeebd@xxxxxxxxx>:
> > Hi,
> > We are observing MAX-Available capacity is not reflecting the full size
> of
> > the cluster.
>
> Max avail is dependent on several factors, one is that the OSD with
> the least free space will be the one used for calculating it, just
> because it could happen that all writes to a pool for randomness
> reasons end up on that one single OSD, and hence the "promise" is
> shown for the worst case. Normally this will not happen, but it could.
> Secondly, the max-avail is per pool, and based on replication factor
> on that single pool. A size=2 pool will show more avail than a size=5
> pool, because writes to the size=5 pool obviously eats 5x the written
> data and the size=2 only uses twice the amount.
> Given the total "197TB avail" and a guess at replication size is set
> to 3, the max-avail should end up close to 197/3, but since OSD 18 has
> some 12% more data on it than OSD 9, the math probably considers the
> per-pool MAX AVAIL as if all OSDs were like OSD 18, but TOTAL AVAIL
> will of course count free space on OSD 9 too.
>
> > # ceph df
> > GLOBAL:
> >     SIZE        AVAIL       RAW USED     %RAW USED
> >     272 TiB     197 TiB       75 TiB         27.68
> > POOLS:
> >     NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS
> >     images                         16     243 GiB      0.60        39 TiB       31955
> >     volumes                        17      22 TiB     36.34        39 TiB     5951619
>
>
> 37   hdd 5.57100  1.00000 5.6 TiB 2.1 TiB 3.5 TiB 37.92 1.37 796
> 38   hdd 5.57100  1.00000 5.6 TiB 1.7 TiB 3.8 TiB 31.03 1.12 841
> 39   hdd 5.57100  1.00000 5.6 TiB 1.9 TiB 3.7 TiB 33.51 1.21 830
>
> It's somewhat interesting to see that every OSD is at variance 1.x,
> meaning all of them report as if they are above average in terms of
> having data. Was this not the complete picture of your OSDs?
> In case there are more OSDs (ssds,nvmes) then of course they will
> count as TOTAL AVAIL for the "ceph df" command, even if all pools
> can't use those due to crush rules.
>
> --
> May the most significant bit of your life be positive.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


