Re: ceph df: pool stored vs bytes_used -- raw or not?

I can confirm that we still occasionally see stored==used even with
14.2.21, but I haven't had time yet to debug the pattern behind these
observations. I'll let you know if we find anything useful.
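
If it helps with tracking the pattern, here is a rough sketch of what I'd
run to spot affected pools. The json field names (pools -> name, stats ->
stored / bytes_used) are what I'd expect `ceph df --format json` to emit on
14.2.x, so treat this as a sketch and adjust if your output differs:

#!/usr/bin/env python3
# Sketch: flag pools where `ceph df` reports stored == bytes_used,
# i.e. the per-pool USED column is not reflecting raw (replicated) usage.
import json
import subprocess

df = json.loads(subprocess.check_output(["ceph", "df", "--format", "json"]))

for pool in df.get("pools", []):
    stats = pool["stats"]
    stored = stats.get("stored", 0)
    used = stats.get("bytes_used", 0)
    if stored and stored == used:
        print(f"{pool['name']}: stored == bytes_used ({stored} B), "
              f"USED does not look like raw usage")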

.. Dan



On Thu, May 20, 2021, 6:56 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:

>
>
> > On 20 May 2021, at 19:47, Igor Fedotov <ifedotov@xxxxxxx> wrote:
> >
> > which PR/ticket are you referring to?
>
>
> This thread:
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/QMHLZOAY7LWGBYA5UK53SZSHLRTPTAQY/
>
> And this ticket:
> https://tracker.ceph.com/issues/48385
>
> All pools show me stored == used, even with all OSDs in/up/weighted, on 14.2.21
>
> RAW STORAGE:
>     CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
>     hdd       595 TiB     334 TiB     261 TiB      261 TiB         43.87
>     nvme       52 TiB      34 TiB      18 TiB       18 TiB         34.68
>     TOTAL     647 TiB     368 TiB     279 TiB      279 TiB         43.13
>
> POOLS:
>     POOL                            ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
>     replicated_rbd                   1    1024      29 TiB       7.74M      29 TiB     10.93        80 TiB
>     erasure_rbd_meta                 6      16     2.9 MiB         154     2.9 MiB         0       7.3 TiB
>     erasure_rbd_data                 7     512      44 TiB      11.64M      44 TiB     15.61       144 TiB
>     .rgw.root                       12      16      11 KiB          22      11 KiB         0        80 TiB
>     default.rgw.control             13      16         0 B           8         0 B         0        80 TiB
>     default.rgw.meta                14      16      45 KiB         211      45 KiB         0       7.3 TiB
>     default.rgw.log                 15      16      88 MiB         345      88 MiB         0        80 TiB
>     default.rgw.buckets.index       16      16     6.3 GiB         717     6.3 GiB      0.03       7.3 TiB
>     default.rgw.buckets.non-ec      17      16     810 KiB       5.74k     810 KiB         0        80 TiB
>     default.rgw.buckets.data        18     512      15 TiB      33.99M      15 TiB      5.70        80 TiB
>     replicated_rbd_nvme             19     256     3.3 TiB     880.95k     3.3 TiB     13.25       7.3 TiB
>     unredundant_rbd_nvme            20     128     3.9 TiB       1.03M     3.9 TiB     15.27        11 TiB
>     fs_data                         21     256      11 TiB      12.32M      11 TiB      4.38        80 TiB
>     fs_meta                         22      16     4.8 GiB     636.79k     4.8 GiB      0.02       7.3 TiB
>     fs_data_nvme                    23      64      20 GiB       5.97k      20 GiB      0.09       7.3 TiB
>     kubernetes_rbd                  24      32     142 GiB      36.71k     142 GiB      0.06        80 TiB
>
>
> Thanks,
> k
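
For the numbers quoted above: if the USED column were showing raw usage,
each replicated pool should report roughly stored * size, and an EC pool
roughly stored * (k+m)/k. A quick back-of-the-envelope sketch -- the pool
sizes and EC profile below are only assumptions, they are not stated in
this thread, so substitute the real values from `ceph osd pool get <pool>
size` and `ceph osd erasure-code-profile get <profile>`:

# Expected raw USED vs. what `ceph df` shows, for a few of the pools above.
# The replication/overhead factors are assumptions (size=3, EC k=4 m=2).
pools = {
    # name: (stored in TiB, assumed raw-space factor)
    "replicated_rbd":           (29, 3.0),      # assumed size=3
    "erasure_rbd_data":         (44, 6 / 4),    # assumed k=4, m=2
    "default.rgw.buckets.data": (15, 3.0),      # assumed size=3
}

for name, (stored_tib, factor) in pools.items():
    print(f"{name}: stored {stored_tib} TiB -> expected raw USED "
          f"~{stored_tib * factor:.0f} TiB, but ceph df shows {stored_tib} TiB")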
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


