Re: Large OMAP Objects in default.rgw.log pool

Has anyone run into this issue before? From my research, most people who hit this warning have it on the RGW index pool, caused by too few index shards (too many objects per shard).
I also checked this thread http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html but found no clues there, because the number of objects per index shard is below 100k for every bucket here, and the objects in default.rgw.log all have size 0.

Thanks.

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of mr. non non <arnondhc@xxxxxxxxxxx>
Sent: Monday, May 20, 2019 7:32 PM
To: EDH - Manuel Rios Fernandez; ceph-users@xxxxxxxxxxxxxx
Subject: Re: Large OMAP Objects in default.rgw.log pool
 
Hi Manuel,

I am running version 12.2.8 with BlueStore and use manual index sharding (configured to 100 shards). As far as I can tell, no bucket reaches 100k objects per shard.
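For reference, the per-shard counts were checked with something along these lines (a rough sketch only; the jq filter is illustrative, assumes jq is installed, and the JSON field names are as they appear here on 12.2.x):

# radosgw-admin bucket limit check
# radosgw-admin bucket limit check | jq '.[].buckets[] | select(.fill_status != "OK")'

The first command reports num_objects, num_shards, objects_per_shard and fill_status per bucket; the second simply filters out buckets that are still within the limit.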
Here are the health status and the cluster log:

# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
    1 large objects found in pool 'default.rgw.log'
    Search the cluster log for 'Large omap object found' for more details.

# cat ceph.log | tail -2
2019-05-19 17:49:36.306481 mon.MONNODE1 mon.0 10.118.191.231:6789/0 528758 : cluster [WRN] Health check failed: 1 large omap objects (LARGE_OMAP_OBJECTS)
2019-05-19 17:49:34.535543 osd.38 osd.38 MONNODE1_IP:6808/3514427 12 : cluster [WRN] Large omap object found. Object: 4:b172cd59:usage::usage.26:head Key count: 8720830 Size (bytes): 1647024346 
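The reported key count can be confirmed against that object directly; per the object name in the log line, it lives in the "usage" namespace of default.rgw.log (a sketch, run from a node with an admin keyring):

# rados -p default.rgw.log --namespace usage listomapkeys usage.26 | wc -l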

All of the objects listed in the pool show size 0:
$  for i in `rados ls -p default.rgw.log`; do rados stat -p default.rgw.log ${i};done  | more
default.rgw.log/obj_delete_at_hint.0000000078 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/meta.history mtime 2019-05-20 19:19:40.000000, size 50
default.rgw.log/obj_delete_at_hint.0000000070 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000104 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000026 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000028 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000040 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000015 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000069 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000095 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000003 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000047 mtime 2019-05-20 19:31:45.000000, size 0
default.rgw.log/obj_delete_at_hint.0000000035 mtime 2019-05-20 19:31:45.000000, size 0
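Note that rados ls without a namespace argument only lists the default namespace, and omap data is not counted in the size reported by rados stat, which is why the usage.* objects carrying the omap keys do not appear above. They do show up when listing all namespaces, e.g.:

# rados ls -p default.rgw.log --all | grep usage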


Could you please advise how to clear the HEALTH_WARN message?

Many thanks.
Arnondh


From: EDH - Manuel Rios Fernandez <mriosfer@xxxxxxxxxxxxxxxx>
Sent: Monday, May 20, 2019 5:41 PM
To: 'mr. non non'; ceph-users@xxxxxxxxxxxxxx
Subject: RE: Large OMAP Objects in default.rgw.log pool
 

Hi Arnondh,

 

What's your Ceph version?

 

Regards

 

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of mr. non non
Sent: Monday, May 20, 2019 12:39
To: ceph-users@xxxxxxxxxxxxxx
Subject: Large OMAP Objects in default.rgw.log pool

 

Hi,

 

I have run into the same issue as described above.

Does anyone know how to fix it?

 

Thanks.

Arnondh

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
