Re: RGW Pool uses way more space than it should be

ID   CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
 -1         48.00000         -   29 TiB   20 TiB   18 TiB  2.2 GiB  324 GiB  8.3 TiB  70.97  1.00    -          root default
-15         16.00000         -  9.6 TiB  6.8 TiB  6.0 TiB  548 MiB  108 GiB  2.8 TiB  71.25  1.00    -              datacenter datacenter1
 -3          8.00000         -  4.8 TiB  3.4 TiB  3.0 TiB  331 MiB   55 GiB  1.4 TiB  70.85  1.00    -                  host prod-osd-101
  0    hdd   1.00000   1.00000  613 GiB  415 GiB  361 GiB   53 MiB  5.6 GiB  198 GiB  67.65  0.95   49      up              osd.0
  1    hdd   1.00000   1.00000  613 GiB  406 GiB  352 GiB   65 MiB  7.1 GiB  207 GiB  66.22  0.93   51      up              osd.1
  2    hdd   1.00000   1.00000  613 GiB  408 GiB  354 GiB   16 MiB  8.1 GiB  205 GiB  66.57  0.94   50      up              osd.2
  3    hdd   1.00000   1.00000  613 GiB  424 GiB  370 GiB   36 MiB  7.0 GiB  189 GiB  69.16  0.97   48      up              osd.3
  4    hdd   1.00000   1.00000  613 GiB  469 GiB  415 GiB      0 B  7.2 GiB  143 GiB  76.60  1.08   52      up              osd.4
  5    hdd   1.00000   1.00000  613 GiB  443 GiB  389 GiB   48 MiB  6.8 GiB  170 GiB  72.28  1.02   56      up              osd.5
  6    hdd   1.00000   1.00000  613 GiB  461 GiB  407 GiB   35 MiB  6.1 GiB  152 GiB  75.15  1.06   47      up              osd.6
  7    hdd   1.00000   1.00000  613 GiB  449 GiB  395 GiB   78 MiB  7.1 GiB  164 GiB  73.21  1.03   50      up              osd.7
 -5          8.00000         -  4.8 TiB  3.4 TiB  3.0 TiB  217 MiB   54 GiB  1.4 TiB  71.65  1.01    -                  host prod-osd-102
  8    hdd   1.00000   1.00000  613 GiB  459 GiB  405 GiB   90 MiB  7.5 GiB  154 GiB  74.88  1.06   59      up              osd.8
  9    hdd   1.00000   1.00000  613 GiB  461 GiB  407 GiB  3.6 MiB  5.9 GiB  152 GiB  75.13  1.06   54      up              osd.9
 10    hdd   1.00000   1.00000  613 GiB  346 GiB  292 GiB      0 B  5.9 GiB  266 GiB  56.52  0.80   52      up              osd.10
 11    hdd   1.00000   1.00000  613 GiB  466 GiB  412 GiB   36 MiB  7.7 GiB  147 GiB  76.06  1.07   54      up              osd.11
 12    hdd   1.00000   1.00000  613 GiB  459 GiB  405 GiB   50 MiB  6.6 GiB  154 GiB  74.89  1.06   51      up              osd.12
 13    hdd   1.00000   1.00000  613 GiB  455 GiB  401 GiB   20 MiB  6.0 GiB  158 GiB  74.30  1.05   48      up              osd.13
 14    hdd   1.00000   1.00000  613 GiB  461 GiB  407 GiB   18 MiB  7.2 GiB  152 GiB  75.18  1.06   48      up              osd.14
 15    hdd   1.00000   1.00000  613 GiB  406 GiB  352 GiB      0 B  6.8 GiB  207 GiB  66.25  0.93   48      up              osd.15
-16         16.00000         -  9.6 TiB  6.8 TiB  6.0 TiB  1.2 GiB  108 GiB  2.8 TiB  71.26  1.00    -              datacenter datacenter2
 -9          8.00000         -  4.8 TiB  3.4 TiB  3.0 TiB  404 MiB   55 GiB  1.4 TiB  71.48  1.01    -                  host prod-osd-201
 24    hdd   1.00000   1.00000  613 GiB  428 GiB  374 GiB   18 MiB  7.4 GiB  185 GiB  69.87  0.98   54      up              osd.24
 25    hdd   1.00000   1.00000  613 GiB  422 GiB  368 GiB   20 MiB  6.5 GiB  191 GiB  68.84  0.97   44      up              osd.25
 26    hdd   1.00000   1.00000  613 GiB  426 GiB  372 GiB   88 MiB  8.2 GiB  187 GiB  69.46  0.98   48      up              osd.26
 27    hdd   1.00000   1.00000  613 GiB  461 GiB  407 GiB   33 MiB  6.8 GiB  152 GiB  75.20  1.06   52      up              osd.27
 28    hdd   1.00000   1.00000  613 GiB  465 GiB  411 GiB  126 MiB  7.2 GiB  148 GiB  75.82  1.07   55      up              osd.28
 29    hdd   1.00000   1.00000  613 GiB  452 GiB  398 GiB   32 MiB  6.3 GiB  161 GiB  73.74  1.04   52      up              osd.29
 30    hdd   1.00000   1.00000  613 GiB  436 GiB  382 GiB   51 MiB  7.0 GiB  177 GiB  71.14  1.00   56      up              osd.30
 31    hdd   1.00000   1.00000  613 GiB  415 GiB  361 GiB   36 MiB  5.9 GiB  198 GiB  67.74  0.95   48      up              osd.31
-11          8.00000         -  4.8 TiB  3.4 TiB  3.0 TiB  817 MiB   52 GiB  1.4 TiB  71.04  1.00    -                  host prod-osd-202
 32    hdd   1.00000   1.00000  613 GiB  459 GiB  405 GiB   20 MiB  6.4 GiB  154 GiB  74.88  1.06   48      up              osd.32
 33    hdd   1.00000   1.00000  613 GiB  360 GiB  306 GiB  662 MiB  6.5 GiB  253 GiB  58.80  0.83   56      up              osd.33
 34    hdd   1.00000   1.00000  613 GiB  459 GiB  405 GiB   49 MiB  7.1 GiB  154 GiB  74.90  1.06   57      up              osd.34
 35    hdd   1.00000   1.00000  613 GiB  415 GiB  361 GiB      0 B  6.2 GiB  198 GiB  67.76  0.95   45      up              osd.35
 36    hdd   1.00000   1.00000  613 GiB  413 GiB  359 GiB      0 B  6.1 GiB  200 GiB  67.39  0.95   39      up              osd.36
 37    hdd   1.00000   1.00000  613 GiB  462 GiB  408 GiB   33 MiB  6.8 GiB  151 GiB  75.44  1.06   52      up              osd.37
 38    hdd   1.00000   1.00000  613 GiB  450 GiB  396 GiB   37 MiB  6.9 GiB  163 GiB  73.45  1.03   53      up              osd.38
 39    hdd   1.00000   1.00000  613 GiB  464 GiB  410 GiB   16 MiB  6.4 GiB  149 GiB  75.73  1.07   58      up              osd.39
 -2         16.00000         -  9.6 TiB  6.7 TiB  5.9 TiB  512 MiB  108 GiB  2.8 TiB  70.40  0.99    -              datacenter datacenter3
 -7          8.00000         -  4.8 TiB  3.4 TiB  3.0 TiB  241 MiB   54 GiB  1.3 TiB  71.99  1.01    -                  host prod-osd-301
 16    hdd   1.00000   1.00000  613 GiB  456 GiB  402 GiB   36 MiB  7.8 GiB  157 GiB  74.32  1.05   55      up              osd.16
 17    hdd   1.00000   1.00000  613 GiB  459 GiB  405 GiB   36 MiB  5.8 GiB  153 GiB  74.96  1.06   49      up              osd.17
 18    hdd   1.00000   1.00000  613 GiB  433 GiB  379 GiB      0 B  6.1 GiB  180 GiB  70.61  0.99   47      up              osd.18
 19    hdd   1.00000   1.00000  613 GiB  435 GiB  381 GiB   30 MiB  7.1 GiB  178 GiB  70.91  1.00   53      up              osd.19
 20    hdd   1.00000   1.00000  613 GiB  461 GiB  407 GiB   18 MiB  7.7 GiB  152 GiB  75.14  1.06   47      up              osd.20
 21    hdd   1.00000   1.00000  613 GiB  436 GiB  382 GiB   50 MiB  7.5 GiB  177 GiB  71.13  1.00   56      up              osd.21
 22    hdd   1.00000   1.00000  613 GiB  427 GiB  373 GiB   17 MiB  5.8 GiB  186 GiB  69.66  0.98   44      up              osd.22
 23    hdd   1.00000   1.00000  613 GiB  424 GiB  370 GiB   54 MiB  6.3 GiB  189 GiB  69.16  0.97   52      up              osd.23
-13          8.00000         -  4.8 TiB  3.3 TiB  2.9 TiB  272 MiB   54 GiB  1.5 TiB  68.82  0.97    -                  host prod-osd-302
 40    hdd   1.00000   1.00000  613 GiB  242 GiB  188 GiB    2 KiB  5.1 GiB  371 GiB  39.41  0.56   48      up              osd.40
 41    hdd   1.00000   1.00000  613 GiB  425 GiB  371 GiB   40 MiB  7.0 GiB  188 GiB  69.40  0.98   52      up              osd.41
 42    hdd   1.00000   1.00000  613 GiB  424 GiB  370 GiB   32 MiB  6.4 GiB  189 GiB  69.17  0.97   48      up              osd.42
 43    hdd   1.00000   1.00000  613 GiB  460 GiB  406 GiB   36 MiB  6.8 GiB  153 GiB  75.10  1.06   51      up              osd.43
 44    hdd   1.00000   1.00000  613 GiB  448 GiB  394 GiB   41 MiB  6.5 GiB  165 GiB  73.10  1.03   55      up              osd.44
 45    hdd   1.00000   1.00000  613 GiB  462 GiB  408 GiB   32 MiB  7.7 GiB  151 GiB  75.44  1.06   56      up              osd.45
 46    hdd   1.00000   1.00000  613 GiB  454 GiB  400 GiB   36 MiB  6.6 GiB  159 GiB  74.04  1.04   52      up              osd.46
 47    hdd   1.00000   1.00000  613 GiB  459 GiB  405 GiB   56 MiB  7.9 GiB  154 GiB  74.90  1.06   52      up              osd.47
                         TOTAL   29 TiB   20 TiB   18 TiB  2.2 GiB  324 GiB  8.3 TiB  70.97
MIN/MAX VAR: 0.56/1.08  STDDEV: 6.26
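
The OSD fill levels themselves look fairly even (MIN/MAX VAR 0.56/1.08), so the missing space is more likely per-object overhead than imbalance. A few standard checks that might narrow it down - whether any of them explains the gap here is only a guess on my part:

# Pending RGW garbage collection (space from deleted objects that
# has not been reclaimed yet):
radosgw-admin gc list --include-all | head

# Per-bucket totals, to compare with the 3.0 TiB STORED in ceph df:
radosgw-admin bucket stats | grep -E '"bucket"|size_actual|num_objects'

# BlueStore allocation unit. Before Pacific the HDD default was 64 KiB,
# and the value is baked in when the OSD is created, so upgraded
# clusters keep the old value. Run on an OSD host:
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd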


> On 8. Apr 2022, at 10:05, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
> 
> How does "ceph osd df tree" look?
> 
> On Fri, 8 Apr 2022 at 09:58, Hendrik Peyerl <hpeyerl@xxxxxxxxxxxx> wrote:
>> 
>> My screenshot didn't make it into the mail; this is the output of ceph df:
>> 
>> --- RAW STORAGE ---
>> CLASS    SIZE    AVAIL    USED  RAW USED  %RAW USED
>> hdd    29 TiB  8.3 TiB  20 TiB    20 TiB      70.95
>> TOTAL  29 TiB  8.3 TiB  20 TiB    20 TiB      70.95
>> 
>> --- POOLS ---
>> POOL                        ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
>> cephfs_data                  1   32   37 GiB  589.95k  168 GiB   3.01    1.8 TiB
>> cephfs_metadata              2   32  1.0 GiB  349.74k  3.0 GiB   0.06    1.8 TiB
>> .rgw.root                    3   32  1.5 KiB        7  1.3 MiB      0    1.8 TiB
>> default.rgw.control          4   32      0 B        8      0 B      0    1.8 TiB
>> default.rgw.meta             5   32  1.2 MiB       70   15 MiB      0    1.8 TiB
>> default.rgw.log              6   32   10 KiB      227  9.7 MiB      0    1.8 TiB
>> default.rgw.buckets.index    7   32  207 MiB      634  622 MiB   0.01    1.8 TiB
>> default.rgw.buckets.non-ec   8   32      0 B        0      0 B      0    1.8 TiB
>> default.rgw.buckets.data     9  512  3.0 TiB   44.83M   13 TiB  71.35    1.8 TiB
>> cephfs.test123.meta         10   16      0 B        0      0 B      0    1.8 TiB
>> cephfs.test123.data         11   32      0 B        0      0 B      0    1.8 TiB
>> device_health_metrics       12    1   40 MiB       63  119 MiB      0    1.8 TiB
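
The line that stands out above is default.rgw.buckets.data: 3.0 TiB STORED but 13 TiB USED, i.e. roughly 4.3x, where plain 3x replication would predict about 9 TiB. A back-of-the-envelope calculation (my own arithmetic from the numbers above, not a confirmed diagnosis):

python3 - <<'EOF'
TiB = 2**40
stored, used, objects, copies = 3.0 * TiB, 13 * TiB, 44.83e6, 3
# average RGW object size implied by ceph df:
print("avg object: %.0f KiB" % (stored / objects / 1024))        # ~72 KiB
# space consumed per copy beyond the logical data:
overhead = used / copies - stored
print("overhead per object per copy: %.0f KiB"
      % (overhead / objects / 1024))                             # ~32 KiB
EOF

About 32 KiB of overhead per object per copy is what rounding each object up to 64 KiB allocation units would waste on average (half a unit), so this looks consistent with bluestore_min_alloc_size_hdd = 64 KiB, the pre-Pacific HDD default, rather than with leaked or orphaned data. Worth verifying with the checks above before acting on it.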
>> 
>> 
>>> On 8. Apr 2022, at 09:54, Hendrik Peyerl <hpeyerl@xxxxxxxxxxxx> wrote:
>>> 
>>> Hi everyone,
>>> 
>>> I have a strange issue and can't figure out what is consuming my disk space:
>>>
>>> I have a total of 29 TB (48x 600 GB) with a replication factor of 3, which results in around 9.6 TB of "real" storage space.
>>> 
>>> I am currently using it mainly for RGW, and I have a total of around 3 TB of files in buckets.
>>> 
>>> The total usage, however, is 20 TB raw, which at 3x replication would mean around 6.6 TB of actual data - but I can't find more than 3 TB of it.
>>> 
>>> Any ideas where I can find my missing space?
>>> 
>>> Thanks in advance,
>>> 
>>> Hendrik
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>> 
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> 
> 
> 
> -- 
> May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



