Hi all,

We have two Ceph clusters in a multisite configuration. Both are working fine (syncing correctly), but one of them is showing the warning "32 large omap objects" in the log pool. This seems to be coming from the sync error list:

    for i in `rados -p wilxite.rgw.log ls`; do echo -n "$i:"; rados -p wilxite.rgw.log listomapkeys $i | wc -l; done > /tmp/omapkeys
    sort -t: -k2 -r -n /tmp/omapkeys | head -1
    sync.error-log.1:474102

The command 'radosgw-admin sync error list' shows a lot of very old entries, none from 2022. How can I get rid of them? The trim command ('radosgw-admin sync error trim') doesn't do anything, and both sites show data and metadata as up to date. I have run a metadata resync, and also a data sync on a couple of buckets mentioned in the error list, but it has made no difference.

I would really appreciate any help, as I have spent hours and hours trawling through everything online and can't find any info on how to clear the error log.

Best Regards,
Sarah
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
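P.S. For anyone following along: the rados loop above writes one "object_name:key_count" line per object to /tmp/omapkeys. As a side note, instead of sorting with the shell, those lines can be ranked with a short Python sketch. The threshold default of 200000 here is an assumption based on the usual osd_deep_scrub_large_omap_object_key_threshold default in recent Ceph releases; check your own cluster's setting.

```python
def find_large_omap(lines, threshold=200000):
    """Parse "object_name:key_count" lines (as produced by the rados loop
    above) and return (name, count) pairs over the threshold, largest first.

    The threshold default mirrors the assumed default of
    osd_deep_scrub_large_omap_object_key_threshold; adjust to match your
    cluster's configured value."""
    counts = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # Split on the last ':' so object names containing ':' still parse.
        name, _, count = line.rpartition(":")
        counts.append((name, int(count)))
    counts.sort(key=lambda pair: pair[1], reverse=True)
    return [pair for pair in counts if pair[1] > threshold]

if __name__ == "__main__":
    # Example lines in the same format as /tmp/omapkeys (values illustrative).
    example = ["sync.error-log.1:474102", "sync.error-log.5:12", "meta.log.0:3"]
    print(find_large_omap(example))  # -> [('sync.error-log.1', 474102)]
```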