Re: large omap objects in the .rgw.log pool

I am experiencing the same 'large omap objects' problem in Ceph, so I wrote a
script to find the large objects in the pool, count the number of omap keys
in each object, and compare that with the value set for
osd_deep_scrub_large_omap_object_key_threshold in the Ceph configuration.

https://gist.github.com/RaminNietzsche/0297e9163834c050234686f5b4acb1a4
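
The gist is short; roughly, it boils down to something like this (a minimal
sketch rather than the script itself - the pool name and the way the
threshold is looked up are assumptions, see the link for the real thing):

    POOL=.rgw.log
    # Warning threshold for omap keys per object (default 200000).
    THRESHOLD=$(ceph config get osd osd_deep_scrub_large_omap_object_key_threshold)
    # Count omap keys per object and report anything at or above the threshold.
    for obj in $(rados -p "$POOL" ls); do
        keys=$(rados -p "$POOL" listomapkeys "$obj" | wc -l)
        [ "$keys" -ge "$THRESHOLD" ] && echo "$obj: $keys omap keys"
    done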

On Sun, Oct 30, 2022 at 5:12 AM Sarah Coxon <sazzle2611@xxxxxxxxx> wrote:

> Hi Anthony,
>
> Thank you for getting back to me. The post you sent was helpful, although
> the log in that post was the usage log, and the same options aren't
> available for the sync error log.
>
> I understand that I can get rid of the warning by increasing the
> threshold but I would really like to get rid of the old data if possible.
>
> The sync error log is full of errors from buckets that have since been
> deleted and the sync errors are over 2 years old. Trimming the sync error
> log does nothing.
>
> From everything I've read, this is an issue with having a multisite
> configuration, where removing buckets doesn't clear the previous data
> properly:
> https://www.spinics.net/lists/ceph-devel/msg45359.html
>
> I already knew this, in that I have been manually deleting the metadata and
> index data for buckets a while after clearing them of objects and deleting
> the bucket itself (making a note of the bucket ID to use in the later
> commands).
>
> I have found this command to get the object IDs in the sync error log
> related to the deleted buckets:
>
> radosgw-admin sync error list | jq  '.[0].entries[] |
> select(.name|test("^company-report-images.")) | .id'
>
> but I don't know how to get rid of them and I really don't want to screw up
> the whole setup by deleting the wrong thing.
>
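Before deleting anything, it might be worth confirming which objects in the
log pool actually hold those entries. A rough sketch, assuming the sync error
log shards follow the usual sync.error-log.<shard> naming (verify with a
plain rados ls first):

    # List candidate sync error log shards with their omap key counts,
    # largest last. Pool and object naming are assumptions - adjust to
    # whatever `rados -p .rgw.log ls` actually shows on your cluster.
    rados -p .rgw.log ls | grep '^sync.error-log' | while read -r obj; do
        echo "$obj $(rados -p .rgw.log listomapkeys "$obj" | wc -l)"
    done | sort -k2 -n
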
> On Thu, Oct 27, 2022 at 9:52 AM Anthony D'Atri <anthony.datri@xxxxxxxxx>
> wrote:
>
> > This prior post
> >
> > https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/2QNKWK642LWCNCJEB5THFGMSLR37FLX7/
> >
> > may help.  You can bump up the warning threshold to make the warning go
> > away - a few releases ago the default threshold was reduced to 1/10 of
> > its prior value.
> >
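If you do end up just raising the threshold, the option should be
osd_deep_scrub_large_omap_object_key_threshold (please double-check the name
on your release); for example, to go back to roughly the old default:

    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000
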
> > There’s also information there about trimming usage logs and removing
> > specific usage log objects.
> >
> > > On Oct 27, 2022, at 4:05 AM, Sarah Coxon <sazzle2611@xxxxxxxxx> wrote:
> > >
> > > Hey, I would really appreciate any help I can get on this, as googling
> > > has led me to a dead end.
> > >
> > > We have 2 data centers, each with 4 servers running Ceph on Kubernetes
> > > in a multisite config. Everything is working great, but recently the
> > > master cluster changed status to HEALTH_WARN, and the issue is large
> > > omap objects in the .rgw.log pool. The second cluster is still HEALTH_OK.
> > >
> > > Viewing the sync error log from master shows a lot of very ancient logs
> > > related to a bucket that has since been deleted.
> > >
> > > Is there any way to clear this log?
> > >
> > > bash-4.4$ radosgw-admin sync error list | wc -l
> > > 352162
> > >
> > > I believe, although I'm not sure, that this makes up a massive part of
> > > the data stored in the .rgw.log pool. I haven't been able to find any
> > > info on this except for several other posts about clearing the error
> > > log, but none of them had a resolution.
> > >
> > > I am tempted to increase the PGs for this pool from 16 to 32 to see if
> > > it helps, but I'm holding off because that is not an ideal solution just
> > > to silence the warning, when all I want is to get rid of the errors
> > > related to a bucket that no longer exists.
> > >
> > > Thanks to anyone that can offer advice!
> > >
> > > Sarah
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx