This prior post may help:
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/2QNKWK642LWCNCJEB5THFGMSLR37FLX7/

You can bump up the warning threshold to make the warning go away - a few
releases ago the default was reduced to 1/10 of its prior value. That thread
also covers trimming usage logs and removing specific usage log objects.
I've sketched the relevant commands below the quoted message.

> On Oct 27, 2022, at 4:05 AM, Sarah Coxon <sazzle2611@xxxxxxxxx> wrote:
>
> Hey, I would really appreciate any help I can get on this, as googling has
> led me to a dead end.
>
> We have 2 data centers, each with 4 servers running Ceph on Kubernetes in a
> multisite config. Everything is working great, but recently the master
> cluster changed status to HEALTH_WARN, and the issue is large omap objects
> in the .rgw.log pool. The second cluster is still HEALTH_OK.
>
> Viewing the sync error log from the master shows a lot of very old entries
> related to a bucket that has since been deleted.
>
> Is there any way to clear this log?
>
> bash-4.4$ radosgw-admin sync error list | wc -l
> 352162
>
> I believe, although I'm not sure, that this makes up a large part of the
> data stored in the .rgw.log pool. I haven't been able to find any info on
> this, apart from several other posts about clearing the error log, none of
> which had a resolution.
>
> I am tempted to increase the PGs for this pool from 16 to 32 to see if it
> helps, but I'm holding off because that is not an ideal solution just to
> silence the warning, when all I want is to get rid of the errors related
> to a bucket that no longer exists.
>
> Thanks to anyone who can offer advice!
>
> Sarah
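
First, to see which objects are actually tripping the warning before you
trim anything (the log path below is an assumption for a stock package
install; on a containerized/Kubernetes deployment check the mon and osd pod
logs instead):

    # Shows which pool(s) have large omap objects and how many
    ceph health detail

    # The cluster log names the exact object and its key count
    grep 'Large omap object found' /var/log/ceph/ceph.log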
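
To silence the warning by restoring the older, higher key-count threshold
(a sketch; recent releases default to 200,000 keys and the pre-reduction
default was 2,000,000, but check what your release actually ships with):

    # Show the current threshold
    ceph config get osd osd_deep_scrub_large_omap_object_key_threshold

    # Raise it back to the old default of 2,000,000 keys
    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000

Note the warning only clears after the affected PGs are deep-scrubbed again.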
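
To trim the RGW usage log, if usage logging is enabled at all (the dates
below are placeholders, not values from this thread):

    # Trim usage entries within a date range
    radosgw-admin usage trim --start-date=2021-01-01 --end-date=2022-01-01

    # Trim the entire usage log
    radosgw-admin usage trim --yes-i-really-mean-it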
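
And for the sync error list itself, which also lives in the log pool, there
is a trim subcommand (behavior is release-dependent - older radosgw-admin
versions accepted start/end markers, newer ones trim the whole log - so test
against a non-production zone first if you can):

    # Trim the multisite replication error log on the master zone
    radosgw-admin sync error trim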