Re: how to handle rgw leaked data (aka data that is not available via buckets but eats diskspace)


 



Hi Anthony,

Yes, we are using replication; the lost space is calculated before it is
replicated.
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.1 PiB     191 TiB     968 TiB      968 TiB         83.55
    TOTAL     1.1 PiB     191 TiB     968 TiB      968 TiB         83.55

POOLS:
    POOL                                ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
    rbd                                  0      64        0 B            0        0 B          0        13 TiB
    .rgw.root                            1      64     99 KiB          119     99 KiB          0        13 TiB
    eu-central-1.rgw.control             2      64        0 B            8        0 B          0        13 TiB
    eu-central-1.rgw.data.root           3      64    947 KiB        2.82k    947 KiB          0        13 TiB
    eu-central-1.rgw.gc                  4      64    101 MiB          128    101 MiB          0        13 TiB
    eu-central-1.rgw.log                 5      64    267 MiB          500    267 MiB          0        13 TiB
    eu-central-1.rgw.users.uid           6      64    2.9 MiB        6.91k    2.9 MiB          0        13 TiB
    eu-central-1.rgw.users.keys          7      64    263 KiB        6.73k    263 KiB          0        13 TiB
    eu-central-1.rgw.meta                8      64    384 KiB           1k    384 KiB          0        13 TiB
    eu-central-1.rgw.users.email         9      64       40 B            1       40 B          0        13 TiB
    eu-central-1.rgw.buckets.index      10      64     10 GiB       67.28k     10 GiB       0.03        13 TiB
    eu-central-1.rgw.buckets.data       11    2048    313 TiB      151.71M    313 TiB      89.25        13 TiB
...

The EC profile is pretty standard:
[root@s3db16 ~]# ceph osd erasure-code-profile ls
default
[root@s3db16 ~]# ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van
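
As a rough sanity check (my own back-of-the-envelope sketch, assuming the
buckets.data pool really uses this default 2+1 profile), the raw overhead
works out like this:

# Back-of-the-envelope EC overhead check for the numbers above.
# Assumption: eu-central-1.rgw.buckets.data uses the default k=2, m=1 profile.
k, m = 2, 1
overhead = (k + m) / k            # 1.5x raw bytes per logical byte stored

stored_tib = 313                  # STORED for the buckets.data pool in "ceph df"
expected_raw_tib = stored_tib * overhead

print(f"overhead factor for {k}+{m}: {overhead:.2f}")
print(f"expected raw footprint: ~{expected_raw_tib:.0f} TiB for {stored_tib} TiB stored")

So roughly 470 TiB raw for that pool alone, before any other pools or
per-OSD overhead.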

We mainly use Ceph 14.2.18. There is one OSD host with 14.2.19 and one with
14.2.20.

The object population is mixed, but most of the data is in huge files.
We store our platform's RBD snapshots in it.

Cheers
 Boris


On Tue, Apr 27, 2021 at 06:49, Anthony D'Atri <
anthony.datri@xxxxxxxxx> wrote:

> Are you using Replication?  EC? How many copies / which profile?
> On which Ceph release were your OSDs built?  BlueStore? Filestore?
> What is your RGW object population like?  Lots of small objects?  Mostly
> large objects?  Average / median object size?
>
> > On Apr 26, 2021, at 9:32 PM, Boris Behrens <bb@xxxxxxxxx> wrote:
> >
> > Hi,
> >
> > We still have the problem that our rgw eats more disk space than it
> > should.
> > Summing up the "size_kb_actual" of all buckets shows only half of the used
> > disk space.
> >
> > There are 312 TiB stored according to "ceph df", but we only need around
> > 158 TB.
> >
> > I've already written to this ML about the problem, but there were no
> > solutions that helped.
> > I've dug through the ML archive and found some interesting threads
> > regarding orphan objects and these kinds of issues.
> >
> > Did anyone ever solve this problem?
> > Or do you just add more disk space?
> >
> > We tried to:
> > * use the "radosgw-admin orphan find/finish" tool (didn't work)
> > * manually trigger the GC (didn't work)
> >
> > Currently running (since yesterday evening):
> > * rgw-orphan-list, which has produced 270 GB of text output and is not done
> > yet (I have 60 GB of disk space left)
> >
> > --
> > The self-help group "UTF-8 problems" will, as an exception, meet in the
> > large hall this time.
>
>
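
P.S.: Regarding summing up "size_kb_actual" from the message quoted above,
this is roughly how such a total can be computed; a minimal sketch that
assumes the output of "radosgw-admin bucket stats" has been saved to a file
first (the file name is just an example):

import json

# Assumes: radosgw-admin bucket stats > bucket_stats.json  (example file name)
with open("bucket_stats.json") as f:
    buckets = json.load(f)  # JSON array with one entry per bucket

total_kib = 0
for b in buckets:
    # every usage category (rgw.main, rgw.multimeta, ...) reports size_kb_actual
    for category in b.get("usage", {}).values():
        total_kib += category.get("size_kb_actual", 0)

print(f"total size_kb_actual across all buckets: {total_kib / 1024**3:.2f} TiB")

That total is what gets compared against what "ceph df" reports as STORED
for the buckets.data pool.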

-- 
The self-help group "UTF-8 problems" will, as an exception, meet in the
large hall this time.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





