Re: Free space in ec-pool should I worry?


 



Hi,

Why do you think it’s used at 91%?

Ceph reports 47.51% usage for this pool.
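
My guess (an assumption, since you didn't show your math) is that the 91% comes from comparing STORED (104 TiB) directly against MAX AVAIL (115 TiB). Those two columns aren't comparable that way: USED is raw capacity including the EC overhead, while MAX AVAIL is already expressed in user-visible terms. A rough sanity check, assuming a k=4, m=2 erasure profile (size 6, min_size 5, stripe_width 16384 suggests k=4), lines up with both numbers:

```python
# Figures taken from the `ceph df` output quoted below for
# sin.rgw.buckets.data. The k=4/m=2 EC layout is an assumption.

stored_tib = 104.0     # STORED (user-visible data)
used_tib = 156.0       # USED (raw, includes EC overhead)
max_avail_tib = 115.0  # MAX AVAIL (user-visible space remaining)

raw_multiplier = (4 + 2) / 4  # (k + m) / k = 1.5x raw overhead

# Naive reading: stored data divided by remaining space -> ~90.4%,
# likely the source of the "91%" figure.
naive_pct = 100 * stored_tib / max_avail_tib
print(f"naive: {naive_pct:.1f}%")

# Ceph's pool %USED compares raw usage against raw usage plus the
# raw equivalent of the space still available to the pool -> ~47.5%,
# matching the 47.51 in the report.
ceph_pct = 100 * used_tib / (used_tib + max_avail_tib * raw_multiplier)
print(f"ceph:  {ceph_pct:.1f}%")
```

So the pool is a bit under half full; the number to watch is the fullest OSD, since MAX AVAIL shrinks as the most-loaded OSD fills up.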

-
Etienne Menguy
etienne.menguy@xxxxxxxx




> On 1 Nov 2021, at 18:03, Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:
> 
> Hi,
> 
> Theoretically my data pool is 91% used, but the fullest OSD is at 60% — should I worry?
> 
> 
> 
> This is the ceph detail:
> 
> --- RAW STORAGE ---
> CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
> nvme    10 TiB  9.3 TiB  292 MiB   1.2 TiB      11.68
> ssd    503 TiB  327 TiB  156 TiB   176 TiB      34.92
> TOTAL  513 TiB  337 TiB  156 TiB   177 TiB      34.44
> 
> --- POOLS ---
> POOL                    ID  PGS  STORED   (DATA)   (OMAP)   OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY   USED COMPR  UNDER COMPR
> device_health_metrics    1    1   41 MiB      0 B   41 MiB       54  123 MiB      0 B  123 MiB      0     57 TiB  N/A            N/A              54         0 B          0 B
> .rgw.root                2   32  981 KiB  978 KiB  3.3 KiB      163  4.0 MiB  4.0 MiB  9.8 KiB      0     57 TiB  N/A            N/A             163         0 B          0 B
> sin.rgw.log             19   32   49 GiB  2.6 MiB   49 GiB   40.85k  146 GiB   12 MiB  146 GiB   0.08     57 TiB  N/A            N/A          40.85k         0 B          0 B
> sin.rgw.buckets.index   20   32  416 GiB      0 B  416 GiB   58.88k  1.2 TiB      0 B  1.2 TiB  12.72    2.8 TiB  N/A            N/A          58.88k         0 B          0 B
> sin.rgw.buckets.non-ec  21   32   16 MiB    405 B   16 MiB       15   48 MiB  180 KiB   48 MiB      0     57 TiB  N/A            N/A              15         0 B          0 B
> sin.rgw.meta            22   32  5.5 MiB  300 KiB  5.2 MiB    1.13k   29 MiB   13 MiB   16 MiB      0    2.8 TiB  N/A            N/A           1.13k         0 B          0 B
> sin.rgw.control         23   32      0 B      0 B      0 B        8      0 B      0 B      0 B      0     57 TiB  N/A            N/A               8         0 B          0 B
> sin.rgw.buckets.data    24  128  104 TiB  104 TiB      0 B    1.30G  156 TiB  156 TiB      0 B  47.51    115 TiB  N/A            N/A           1.30G         0 B          0 B
> 
> 
> 
> 
> 
> This is the osd df:
> 
> ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS
> 36   nvme   1.74660   1.00000  1.7 TiB  209 GiB   47 MiB  208 GiB   776 MiB  1.5 TiB  11.68  0.34   31      up
> 0    ssd  13.97069   1.00000   14 TiB  5.2 TiB  4.6 TiB  8.7 MiB   583 GiB  8.8 TiB  37.00  1.07   39      up
> 8    ssd  13.97069   1.00000   14 TiB  8.1 TiB  7.2 TiB  3.6 GiB   890 GiB  5.9 TiB  57.66  1.67   47      up
> 15    ssd  13.97069   1.00000   14 TiB  2.8 TiB  2.5 TiB   10 MiB   299 GiB   11 TiB  19.69  0.57   19      up
> 18    ssd  13.97069   1.00000   14 TiB  4.7 TiB  4.2 TiB  5.8 MiB   530 GiB  9.2 TiB  33.80  0.98   34      up
> 24    ssd  13.97069   1.00000   14 TiB  3.9 TiB  3.5 TiB  6.6 MiB   477 GiB   10 TiB  28.22  0.82   21      up
> 30    ssd  13.97069   1.00000   14 TiB  4.8 TiB  4.2 TiB  5.4 MiB   545 GiB  9.2 TiB  34.18  0.99   31      up
> 37   nvme   1.74660   1.00000  1.7 TiB  273 GiB   47 MiB  271 GiB   1.3 GiB  1.5 TiB  15.25  0.44   39      up
> 1    ssd  14.55289   1.00000   14 TiB  5.0 TiB  4.4 TiB   15 GiB   576 GiB  9.0 TiB  35.90  1.04   29      up
> 11    ssd  14.55289   1.00000   14 TiB  7.3 TiB  6.5 TiB   15 GiB   798 GiB  6.7 TiB  52.11  1.51   42      up
> 17    ssd  14.55289   1.00000   14 TiB  5.7 TiB  5.1 TiB  7.4 GiB   623 GiB  8.3 TiB  40.84  1.19   39      up
> 23    ssd  14.55289   1.00000   14 TiB  5.1 TiB  4.5 TiB  2.4 GiB   578 GiB  8.9 TiB  36.41  1.06   31      up
> 28    ssd  14.55289   1.00000   14 TiB  4.8 TiB  4.3 TiB  9.8 GiB   524 GiB  9.2 TiB  34.26  0.99   39      up
> 35    ssd  14.55289   1.00000   14 TiB  1.3 TiB  1.2 TiB  4.9 GiB   143 GiB   13 TiB   9.41  0.27   21      up
> 41   nvme   1.74660   1.00000  1.7 TiB  222 GiB   47 MiB  221 GiB   735 MiB  1.5 TiB  12.39  0.36   33      up
> 2    ssd  14.55289   1.00000   14 TiB  4.2 TiB  3.6 TiB   22 GiB   511 GiB  9.8 TiB  29.73  0.86   33      up
> 6    ssd  14.55289   1.00000   14 TiB  2.0 TiB  1.8 TiB   10 MiB   214 GiB   12 TiB  14.02  0.41   20      up
> 13    ssd  14.55289   1.00000   14 TiB  5.2 TiB  4.6 TiB   15 MiB   600 GiB  8.8 TiB  37.14  1.08   30      up
> 19    ssd  14.55289   1.00000   14 TiB  3.6 TiB  3.2 TiB   54 MiB   401 GiB   10 TiB  25.77  0.75   26      up
> 26    ssd  14.55289   1.00000   14 TiB  5.8 TiB  5.2 TiB   14 MiB   635 GiB  8.2 TiB  41.45  1.20   38      up
> 32    ssd  13.97069   1.00000   14 TiB  8.6 TiB  7.7 TiB   16 MiB  1014 GiB  5.3 TiB  61.85  1.80   46      up
> 38   nvme   1.74660   1.00000  1.7 TiB  184 GiB   47 MiB  184 GiB   731 MiB  1.6 TiB  10.31  0.30   26      up
> 5    ssd  14.55289   1.00000   14 TiB  3.1 TiB  2.7 TiB   13 MiB   336 GiB   11 TiB  21.89  0.64   24      up
> 7    ssd  14.55289   1.00000   14 TiB  5.8 TiB  5.2 TiB  7.4 GiB   631 GiB  8.2 TiB  41.47  1.20   42      up
> 14    ssd  14.55289   1.00000   14 TiB  3.8 TiB  3.3 TiB  5.6 MiB   465 GiB   10 TiB  27.11  0.79   24      up
> 20    ssd  14.55289   1.00000   14 TiB  6.3 TiB  5.6 TiB  3.9 MiB   689 GiB  7.7 TiB  44.99  1.31   31      up
> 25    ssd  14.55289   1.00000   14 TiB  3.5 TiB  3.0 TiB  5.3 GiB   460 GiB   10 TiB  24.87  0.72   26      up
> 31    ssd  14.55289   1.00000   14 TiB  6.7 TiB  5.9 TiB   15 GiB   729 GiB  7.3 TiB  47.66  1.38   41      up
> 40   nvme   1.74660   1.00000  1.7 TiB  144 GiB   47 MiB  142 GiB   2.2 GiB  1.6 TiB   8.07  0.23   29      up
> 3    ssd  14.55289   1.00000   14 TiB  4.4 TiB  3.9 TiB   29 MiB   509 GiB  9.6 TiB  31.50  0.91   27      up
> 10    ssd  14.55289   1.00000   14 TiB  4.3 TiB  3.8 TiB  7.3 GiB   546 GiB  9.6 TiB  31.06  0.90   34      up
> 12    ssd  14.55289   1.00000   14 TiB  5.4 TiB  4.8 TiB  2.4 GiB   590 GiB  8.6 TiB  38.66  1.12   32      up
> 21    ssd  14.55289   1.00000   14 TiB  7.3 TiB  6.5 TiB  7.3 GiB   904 GiB  6.6 TiB  52.59  1.53   43      up
> 29    ssd  14.55289   1.00000   14 TiB  3.0 TiB  2.6 TiB  4.7 MiB   322 GiB   11 TiB  21.17  0.61   25      up
> 33    ssd  14.55289   1.00000   14 TiB  5.0 TiB  4.4 TiB   11 MiB   551 GiB  9.0 TiB  35.53  1.03   29      up
> 39   nvme   1.74660   1.00000  1.7 TiB  222 GiB   47 MiB  221 GiB   728 MiB  1.5 TiB  12.40  0.36   34      up
> 4    ssd  14.55289   1.00000   14 TiB  4.5 TiB  4.0 TiB   12 MiB   483 GiB  9.5 TiB  32.05  0.93   41      up
> 9    ssd  14.55289   1.00000   14 TiB  3.5 TiB  3.1 TiB  7.3 GiB   420 GiB   10 TiB  25.00  0.73   21      up
> 16    ssd  14.55289   1.00000   14 TiB  6.5 TiB  5.8 TiB  4.9 GiB   712 GiB  7.5 TiB  46.47  1.35   33      up
> 22    ssd  14.55289   1.00000   14 TiB  5.5 TiB  4.8 TiB  4.9 GiB   650 GiB  8.5 TiB  39.07  1.13   37      up
> 27    ssd  14.55289   1.00000   14 TiB  4.1 TiB  3.7 TiB   12 MiB   446 GiB  9.8 TiB  29.50  0.86   29      up
> 34    ssd  14.55289   1.00000   14 TiB  5.2 TiB  4.6 TiB  4.9 GiB   570 GiB  8.8 TiB  37.21  1.08   31      up
>                        TOTAL  513 TiB  177 TiB  156 TiB  1.4 TiB    19 TiB  337 TiB  34.45                   
> MIN/MAX VAR: 0.23/1.80  STDDEV: 13.56
> 
> 
> 
> 
> This is the pool info:
> pool 24 'sin.rgw.buckets.data' erasure profile data-ec size 6 min_size 5 crush_rule 3 object_hash rjenkins pg_num 128 pgp_num 81 pgp_num_target 128 autoscale_mode warn last_change 19008 lfor 0/0/17477 flags hashpspool stripe_width 16384 application rgw
> 
> (PG increase is in progress, by the way.)
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx




