I'm having an issue similar to the one described in http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html, but I don't see that any solution was ever proposed there.
$ ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'us-prd-1.rgw.log'
Search the cluster log for 'Large omap object found' for more details.
$ grep "Large omap object" /var/log/ceph/ceph.log
2019-07-25 14:58:21.758321 osd.3 (osd.3) 15 : cluster [WRN] Large omap object found. Object: 51:61eb35fe:::meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19:head Key count: 3382154 Size (bytes): 611384043

$ rados -p us-prd-1.rgw.log listomapkeys meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19 | wc -l
3382154

$ rados -p us-prd-1.rgw.log listomapvals meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19
This returns entries from almost every bucket, across multiple tenants. Several of the entries are from buckets that no longer exist on the system.
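In case it helps with comparison, key counts for all of the meta.log shard objects can be checked with a loop like the one below (just a sketch; it assumes the default meta.log.<marker>.<shard> object naming and that nothing else in the pool matches the grep):

$ for obj in $(rados -p us-prd-1.rgw.log ls | grep '^meta.log'); do
      echo "$obj $(rados -p us-prd-1.rgw.log listomapkeys $obj | wc -l)"
  done | sort -n -k2

The sort puts the shard with the most keys at the bottom, which makes the offending object easy to spot.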
$ ceph df | egrep 'OBJECTS|.rgw.log'
POOL                 ID   STORED    OBJECTS   USED      %USED   MAX AVAIL
us-prd-1.rgw.log     51   758 MiB       228   758 MiB       0     102 TiB
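For reference, the thresholds that trigger this warning can be read from the reporting OSD; as far as I know the relevant options are osd_deep_scrub_large_omap_object_key_threshold and osd_deep_scrub_large_omap_object_value_sum_threshold (treat the option names as a sketch, they may differ between releases):

$ ceph daemon osd.3 config show | grep large_omap

With 3382154 keys in a single object, it's presumably the key threshold that is being exceeded here.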
Thanks,
-Brett