Hi Manuel,
I use version 12.2.8 with bluestore and also use manual index sharding (configured to 100 shards). As far as I have checked, no bucket reaches 100k for objects_per_shard.
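(For reference, the per-shard figures can be checked directly with radosgw-admin's limit check; this is just a sketch, assuming the subcommand is available in 12.2.x and that the output reports objects_per_shard and fill_status per bucket as it does in Luminous:)

# radosgw-admin bucket limit check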
Here are the health status and the cluster log:
# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'default.rgw.log'
Search the cluster log for 'Large omap object found' for more details.
# cat ceph.log | tail -2
2019-05-19 17:49:36.306481 mon.MONNODE1 mon.0 10.118.191.231:6789/0 528758 : cluster [WRN] Health check failed: 1 large omap objects (LARGE_OMAP_OBJECTS)
2019-05-19 17:49:34.535543 osd.38 osd.38 MONNODE1_IP:6808/3514427 12 : cluster [WRN] Large omap object found. Object: 4:b172cd59:usage::usage.26:head Key count: 8720830 Size (bytes): 1647024346
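(As I understand it, this warning is raised during deep scrub when an object's omap key count or total key size crosses the OSD thresholds, so the ~8.7M keys on usage.26 are what trips it. The current limits can be read back from the OSD that logged the message; a sketch assuming the Luminous option names, run on the host of osd.38:)

# ceph daemon osd.38 config get osd_deep_scrub_large_omap_object_key_threshold
# ceph daemon osd.38 config get osd_deep_scrub_large_omap_object_size_threshold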
All of the objects have a size of 0:
$ for i in `rados ls -p default.rgw.log`; do rados stat -p default.rgw.log ${i};done | more
default.rgw.log/obj_delete_at_hint.0000000078 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/meta.history mtime 2019-05-20 19:19:40.000000, size 50
default.rgw.log/obj_delete_at_hint.0000000070 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000104 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000026 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000028 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000040 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000015 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000069 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000095 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000003 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000047 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000035 mtime 2019-05-20 19:31:45.000000, size 0
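(If I understand correctly, rados stat only reports the data payload and not omap keys, which would explain the size 0 everywhere. Also, the flagged usage.26 object appears to live in the "usage" namespace according to the OSD log line, so it does not show up in a plain rados ls. A sketch to confirm its key count, assuming that namespace:)

$ rados -p default.rgw.log --all ls | grep usage
$ rados -p default.rgw.log -N usage listomapkeys usage.26 | wc -l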
Please kindly advise how to clear the HEALTH_WARN message.
Many thanks.
Arnondh
From: EDH - Manuel Rios Fernandez <mriosfer@xxxxxxxxxxxxxxxx>
Sent: Monday, May 20, 2019 5:41 PM
To: 'mr. non non'; ceph-users@xxxxxxxxxxxxxx
Subject: RE: Large OMAP Objects in default.rgw.log pool

Hi Arnondh,
What's your ceph version?
Regards
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On behalf of mr. non non
Hi,
I found the same issue as described above. Does anyone know how to fix it?
Thanks.
Arnondh