Re: High usage (DATA column) on OSDs dedicated to OMAP only

Hi, Anthony!

Thank you, but I checked it twice and carefully. There are no PGs from other
pools on these OSDs.

I did it using ceph pg ls-by-osd XX
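
Roughly like this (just a sketch: osd.57 is only an example id taken from the
output below, and the grep/cut pipeline is one illustrative way to group PGs by
pool, the pool id being the part of the PG id before the dot):

  # count PGs per pool id on one OSD
  ceph pg ls-by-osd 57 | grep -E '^[0-9]+\.' | cut -d. -f1 | sort -n | uniq -c

  # map pool ids back to pool names and their CRUSH rules
  ceph osd pool ls detail        # or: ceph osd dump | grep pool
  ceph osd crush rule dump

Only PGs from the pools listed below show up.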

On Wed, Sep 18, 2024 at 14:47 Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:

> Dump your CRUSH rules and compare with `ceph osd dump | grep pool`
>
> I suspect you have a data pool using those SSDs that you’re not aware of.
>
> > On Sep 18, 2024, at 7:32 AM, Александр Руденко <a.rudikk@xxxxxxxxx> wrote:
> >
> > Hi!
> >
> > We have an S3/CephFS cluster with dedicated SSDs for the bucket indexes, the
> > CephFS metadata, and a few small RGW metadata pools.
> > We have dedicated CRUSH rules for these pools.
> > These are all of our pools placed on the SSDs (the pool naming is old because
> > it's a very old cluster):
> > POOL                  ID   PGS   STORED   OBJECTS   USED     %USED  MAX AVAIL
> > .rgw.root              1    32   1.3 KiB        5   108 KiB      0    690 GiB
> > .rgw.control           2    32       0 B        8       0 B      0    690 GiB
> > .rgw                   3    32   2.0 MiB    8.02k   328 MiB   0.02    690 GiB
> > .rgw.gc                4    64   1.6 GiB       64   4.9 GiB   0.24    690 GiB
> > .users.uid             5    32   1.8 MiB    2.63k    67 MiB      0    690 GiB
> > .users                 6    32   275 KiB    7.02k   379 MiB   0.02    690 GiB
> > .usage                 7    32    30 MiB       64    30 MiB      0    690 GiB
> > .intent-log            8    64       0 B        0       0 B      0    690 GiB
> > .log                   9    64   6.6 GiB   34.29k    21 GiB   1.01    690 GiB
> > .rgw.buckets.index    12  4096   1.9 TiB  248.96k   1.9 TiB  48.02    690 GiB
> > .users.email          13    32    64 KiB    1.61k    72 MiB      0    690 GiB
> > fs1_meta              14    64   467 MiB   35.39k   829 MiB   0.04    690 GiB
> >
> > But on all our SSDs we can see high DATA usage, for example:
> > ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL   %USE   VAR   PGS  STATUS  TYPE NAME
> > 57  ssd    0.00005   1.00000  447 GiB  406 GiB  309 GiB  96 GiB  1.4 GiB  40 GiB  90.96  1.37  138      up  osd.57
> > 19  ssd    0.00005   1.00000  447 GiB  402 GiB  309 GiB  92 GiB  1.4 GiB  44 GiB  90.09  1.35  145      up  osd.19
> > 10  ssd    0.00005   1.00000  447 GiB  406 GiB  309 GiB  97 GiB  1.1 GiB  40 GiB  91.01  1.37  137      up  osd.10
> > 12  ssd    0.00005   1.00000  447 GiB  401 GiB  309 GiB  91 GiB  1.3 GiB  46 GiB  89.77  1.35  134      up  osd.12
> >
> > All SSDs are BlueStore.
> > The cluster is on Ceph 16.3.11, and many of the SSDs were deployed on 16.x.
> > As I understand it, the majority of these pools contain only omap data, which is
> > stored in RocksDB and tracked in the stats as *OMAP*.
> >
> > And I don't understand why we have such high *DATA* usage; we can see that
> > DATA has been growing by 1-2 GB/day for the last 90 days (we don't have
> > older monitoring data)!
> >
> > I have checked the CRUSH rules and checked all SSDs for PGs from other pools,
> > and I can't see any PGs from "pure" data pools. The rules are correct.
> >
> > How can I see what is stored in the BlueStore DATA on a given OSD? I can export
> > the full RocksDB, but not the DATA stored in BlueStore.
> > What kind of DATA can be stored on these OSDs with these pools?
> > We did a lot of offline compactions, 2-5 times for some SSDs, in the last 90
> > days, but I'm not sure whether that is related.
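
(A possible way to at least see which pools the objects on one OSD belong to,
just a sketch, assuming a non-containerized deployment with the default data
path and that the OSD can be stopped briefly:

  # OSD must be stopped first; each line printed by --op list starts with the
  # PG id, and the pool id is the part before the dot
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-57 --op list \
    | cut -d'"' -f2 | cut -d. -f1 | sort -n | uniq -c

This only lists object names per pool, not their on-disk size, but it can at
least confirm whose objects an OSD is holding.)
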
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



