Re: Large omap

Hello,

Yes it is, this is the output: "default.rgw.log:gc"


From: Thomas Bennett <thomas@xxxxxxxxx>
Sent: Wednesday, May 20, 2020 5:44 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re:  Re: Large omap

Hi,

Have you looked at the omap keys to see what's listed there?

In our configuration, the radosgw garbage collector uses the default.rgw.logs pool (radosgw-admin zone get default | jq .gc_pool).

I've seen large omaps in my default.rgw.logs pool before, when I had deleted large amounts of s3 data and there were many shadow objects still waiting to be deleted by the garbage collector.
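
For example, something like this (an untested sketch; the object name gc.0 is just an example, and namespaces matter in that pool) shows what the omaps actually contain:

  # list objects across all namespaces of the pool (gc, usage, etc.)
  rados -p default.rgw.log ls --all

  # count the omap keys on one suspect object, e.g. a gc shard in the gc namespace
  rados -p default.rgw.log --namespace=gc listomapkeys gc.0 | wc -l

  # check how much the garbage collector still has queued up
  radosgw-admin gc list --include-all | head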

Cheers,
Tom

On Wed, May 20, 2020 at 9:47 AM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
On Wed, 20 May 2020 at 05:23, Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:

> LARGE_OMAP_OBJECTS 1 large omap objects
>     1 large objects found in pool 'default.rgw.log'
> When I look for this large omap object, this is the one:
> for i in `ceph pg ls-by-pool default.rgw.log | tail -n +2 | awk '{print $1}'`; do
>     echo -n "$i: "; ceph pg $i query | grep num_large_omap_objects | head -1 | awk '{print $2}'
> done | grep ": 1"
> 4.d: 1
> I found only this way to reduce the size:
> radosgw-admin usage trim --end-date=2019-05-01 --yes-i-really-mean-it
>
> However, while this was running, RGW became completely unreachable, the
> load balancer started flapping, and users started to complain because
> they couldn't do anything.
> Is there any other way to fix it, or any suggestion why this issue happens?
>

If you are not using the usage logs for anything, there are options in rgw
to stop producing them. That is a blunt but working solution: with nothing
to trim, there are no outages from trimming.

If you do use them, perhaps set "rgw usage max user shards" to something
larger than the default 1.
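
Roughly, in ceph.conf that would look something like this (just a sketch; the
section name and the shard count are examples to adapt, and as far as I know
already-written usage entries are not resharded retroactively):

  [client.rgw]
  # if the usage log is not needed at all, stop producing it
  rgw enable usage log = false

  # or, if it is used, spread it over more omap shards (default is 1 per user)
  rgw usage max user shards = 8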

--
May the most significant bit of your life be positive.


--
Thomas Bennett

Storage Engineer at SARAO

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


