Re: RGW Pool uses way more space than it should be

Hi Hendrik,

Which version of Ceph was this deployed on, and did you use the default
value for bluestore_min_alloc_size(_hdd)? If the cluster predates
Pacific and you used the default, the min alloc size is 64 KiB for
HDDs, which can cause quite a bit of usage inflation depending on the
sizes of the objects involved.
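To illustrate the inflation Josh describes, here is a minimal sketch (not Ceph code; the 64 KiB figure is the pre-Pacific HDD default): BlueStore rounds each object's data up to a whole number of min_alloc_size units, so small objects can consume many times their logical size.

```python
def allocated_bytes(object_size: int, min_alloc: int = 64 * 1024) -> int:
    """Approximate space BlueStore allocates for an object's data:
    rounded up to a whole number of min_alloc_size units."""
    if object_size == 0:
        return 0
    return -(-object_size // min_alloc) * min_alloc  # ceiling division

# A 4 KiB RGW object on a pre-Pacific HDD OSD occupies a full 64 KiB
# allocation unit -- 16x inflation for small objects.
print(allocated_bytes(4 * 1024))    # 65536
print(allocated_bytes(100 * 1024))  # 131072 (two 64 KiB units)
```

With replication the raw cost multiplies again, so a workload of many small objects can account for a surprisingly large share of the DATA figure.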

Josh

On Fri, Apr 8, 2022 at 2:38 AM Hendrik Peyerl <hpeyerl@xxxxxxxxxxxx> wrote:
>
> Thank you for your quick investigation. The formatting in mails is not great: the OMAP is only 2.2 GiB, and the 8.3 TiB is AVAIL.
>
>
> > On 8. Apr 2022, at 10:18, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
> >
> > Den fre 8 apr. 2022 kl 10:06 skrev Hendrik Peyerl <hpeyerl@xxxxxxxxxxxx>:
> >>
> >> ID   CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
> >> -1         48.00000         -   29 TiB   20 TiB   18 TiB  2.2 GiB  324 GiB  8.3 TiB  70.97  1.00    -          root default
> >
> >
> > I guess all the RGW and CephFS objects have a lot of metadata, making
> > your OMAPs eat 8.3 TiB even if the actual data content is far
> > smaller.
> >
> > --
> > May the most significant bit of your life be positive.
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
