Re: Ceph v15.2.14 - Dirty Object issue

Hi,
the DIRTY field was removed in Octopus v15.2.15 when cache tiering is not
in use.
See [1] for the PR and [2] for the release notes that include it.

[1] https://github.com/ceph/ceph/pull/42862
[2] https://docs.ceph.com/en/latest/releases/octopus/
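If you want to confirm that cache tiering is not configured anywhere on
the cluster before relying on that, a quick check along these lines should
do (a minimal sketch, assuming a standard deployment; the grep pattern is
just a convenience):

    # Pools involved in cache tiering show "tiers"/"tier_of"/"cache_mode"
    # entries in the detailed pool listing; no matches means no tiering.
    ceph osd pool ls detail | grep -E 'tier|cache_mode'

    # The DIRTY column in your paste comes from the detailed df view.
    ceph df detail

Without cache tiering, DIRTY simply mirrors the OBJECTS count (as it does
in your output), so there is nothing to clean up; on releases that include
[1] the column is no longer printed at all.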

On Fri, Mar 3, 2023 at 5:39 AM <xadhoom76@xxxxxxxxx> wrote:

> Hi, we have a cluster with this ceph df
>
> --- RAW STORAGE ---
> CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
> hdd    240 GiB  205 GiB   29 GiB    35 GiB      14.43
> hddvm  1.6 TiB  1.2 TiB  277 GiB   332 GiB      20.73
> TOTAL  1.8 TiB  1.4 TiB  305 GiB   366 GiB      19.91
>
> --- POOLS ---
> POOL                   ID  PGS  STORED   (DATA)   (OMAP)   OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY    USED COMPR  UNDER COMPR
> device_health_metrics   1    1      0 B      0 B      0 B        0      0 B      0 B      0 B      0    308 GiB  N/A            N/A                0         0 B          0 B
> rbd-pool                2   32    539 B     19 B    520 B        9    539 B     19 B    520 B      0    462 GiB  N/A            N/A                9         0 B          0 B
> cephfs.sharedfs.meta    3   32  299 MiB  190 MiB  109 MiB   87.10k  299 MiB  190 MiB  109 MiB   0.03    308 GiB  N/A            N/A           87.10k         0 B          0 B
> cephfs.sharedfs.data    4   32  2.2 GiB  2.2 GiB      0 B  121.56k  2.2 GiB  2.2 GiB      0 B   0.23    308 GiB  N/A            N/A          121.56k         0 B          0 B
> rbd-pool-proddeb02      5   32  712 MiB  712 MiB    568 B      201  712 MiB  712 MiB    568 B   0.08    308 GiB  N/A            N/A              201         0 B          0 B
>
>
> So as you can see we have 332 GiB RAW USED, but the data really is only
> 299 MiB + 2.2 GiB + 712 MiB.
>
> POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
> device_health_metrics   1    1      0 B        0      0 B      0    308 GiB
> rbd-pool                2   32    539 B        9    539 B      0    462 GiB
> cephfs.sharedfs.meta    3   32  299 MiB   87.10k  299 MiB   0.03    308 GiB
> cephfs.sharedfs.data    4   32  2.2 GiB  121.56k  2.2 GiB   0.23    308 GiB
> rbd-pool-proddeb02      5   32  712 MiB      201  712 MiB   0.08    308 GiB
>
> How can we clean up DIRTY? How is that possible? Is it a cache issue, or
> an uncommitted flush from a client?
> Best regards
> Alessandro
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



