Re: Large omap

Hi,

Have you looked at the omap keys to see what's listed there?
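
For example, something along these lines will show which objects in that
pool carry the most omap keys and what those keys look like (a rough
sketch: the pool name is taken from the health warning, and the final
object name is a placeholder):

for obj in $(rados -p default.rgw.log ls); do
  echo "$obj: $(rados -p default.rgw.log listomapkeys "$obj" | wc -l)"
done | sort -t: -k2 -n | tail

# then inspect the keys of the biggest object
rados -p default.rgw.log listomapkeys <object-name> | head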

In our configuration, the radosgw garbage collector uses the
*default.rgw.log* pool for garbage collection (radosgw-admin zone get
default | jq .gc_pool).

I've seen large omaps in my *default.rgw.log* pool before when I've
deleted large amounts of S3 data and there are many shadow objects that
still need to be deleted by the garbage collector.
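
If that's the case for you as well, you can get a rough idea of the
garbage collection backlog with radosgw-admin (a sketch; gc list can take
a while when the backlog is large, and the grep only approximates the
entry count):

radosgw-admin gc list --include-all | grep -c '"tag"'

# optionally run a gc pass now instead of waiting for the next cycle
radosgw-admin gc process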

Cheers,
Tom

On Wed, May 20, 2020 at 9:47 AM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

> On Wed, 20 May 2020 at 05:23, Szabo, Istvan (Agoda)
> <Istvan.Szabo@xxxxxxxxx> wrote:
>
> > LARGE_OMAP_OBJECTS 1 large omap objects
> >     1 large objects found in pool 'default.rgw.log'
> > When I look for this large omap object, this is the one:
> > for i in `ceph pg ls-by-pool default.rgw.log | tail -n +2 | awk '{print $1}'`; do
> >   echo -n "$i: "
> >   ceph pg $i query | grep num_large_omap_objects | head -1 | awk '{print $2}'
> > done | grep ": 1"
> > 4.d: 1
> > The only way I found to reduce the size is:
> > radosgw-admin usage trim --end-date=2019-05-01 --yes-i-really-mean-it
> >
> > However, when this was running the RGW became completely unreachable,
> > the load balancer started flapping, and users started to complain
> > because they couldn't do anything.
> > Is there any other way to fix this, or any suggestion as to why this
> > issue happens?
> >
>
> If you are not using the usage logs for anything, there are options in rgw
> to stop producing them, which is a blunt but workable way to avoid having
> to clean them out and the outages that trimming causes.
>
> If you do use them, perhaps set "rgw usage max user shards" to something
> larger than the default 1.
>
> --
> May the most significant bit of your life be positive.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
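
For reference, the options Janne mentions would look roughly like this in
ceph.conf (a sketch only: the section name is a placeholder and the shard
counts are arbitrary examples, so adjust before relying on them):

[client.rgw.<instance>]
# stop recording usage log entries altogether, if you don't need them
rgw_enable_usage_log = false
# or, if you do use the usage log, spread it across more omap shards
rgw_usage_max_user_shards = 8
rgw_usage_max_shards = 32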


-- 
Thomas Bennett

Storage Engineer at SARAO
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


