Paul;

I upgraded the cluster in question from 14.2.2 to 14.2.4 just before this came up, so that makes sense.

Thank you,

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Paul Emmerich
Sent: Friday, November 15, 2019 8:48 AM
To: Wido den Hollander
Cc: Ceph Users
Subject: Re: Large OMAP Object

Note that the size limit changed from 2M keys to 200k keys recently (14.2.3 or 14.2.2 or something), so that object is probably older and that's just the first deep scrub with the reduced limit that triggered the warning.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Nov 15, 2019 at 4:40 PM Wido den Hollander <wido@xxxxxxxx> wrote:
>
> On 11/15/19 4:35 PM, DHilsbos@xxxxxxxxxxxxxx wrote:
> > All;
> >
> > Thank you for your help so far. I have found the log entries from when the object was found, but don't see a reference to the pool.
> >
> > Here are the logs:
> > 2019-11-14 03:10:16.508601 osd.1 (osd.1) 21 : cluster [DBG] 56.7 deep-scrub starts
> > 2019-11-14 03:10:18.325881 osd.1 (osd.1) 22 : cluster [WRN] Large omap object found. Object: 56:f7d15b13:::.dir.f91aeff8-a365-47b4-a1c8-928cd66134e8.44130.1:head Key count: 380425 Size (bytes): 82896978
> >
>
> In this case it's in pool 56; check 'ceph df' to see which pool that is.
>
> To me this seems like an RGW bucket whose index grew too big.
>
> Use:
>
> $ radosgw-admin bucket list
> $ radosgw-admin metadata get bucket:<BUCKET>
>
> And match that UUID back to the bucket.
>
> Wido
>
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director – Information Technology
> > Perform Air International Inc.
> > DHilsbos@xxxxxxxxxxxxxx
> > www.PerformAir.com
> >
> > -----Original Message-----
> > From: Wido den Hollander [mailto:wido@xxxxxxxx]
> > Sent: Friday, November 15, 2019 1:56 AM
> > To: Dominic Hilsbos; ceph-users@xxxxxxxxxxxxxx
> > Cc: Stephen Self
> > Subject: Re: Large OMAP Object
> >
> > Did you check /var/log/ceph/ceph.log on one of the Monitors to see which
> > pool and object the large object is in?
> >
> > Wido
> >
> > On 11/15/19 12:23 AM, DHilsbos@xxxxxxxxxxxxxx wrote:
> >> All;
> >>
> >> We had a warning about a large OMAP object pop up in one of our clusters overnight. The cluster is configured for CephFS, but nothing mounts a CephFS at this time.
> >>
> >> The cluster mostly uses RGW. I've checked the cluster log, the MON log, and the MGR log on one of the mons, with no useful references to the pool / PG where the large OMAP object resides.
> >>
> >> Is my only option for finding this large OMAP object to go through the OSD logs of the individual OSDs in the cluster?
> >>
> >> Thank you,
> >>
> >> Dominic L. Hilsbos, MBA
> >> Director - Information Technology
> >> Perform Air International Inc.
> >> DHilsbos@xxxxxxxxxxxxxx
> >> www.PerformAir.com
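
For anyone hitting this warning later, here is a condensed sketch of the lookup described above. It assumes jq is installed, and that the bucket marker is the part of the index object's name between ".dir." and ":head" (my reading of the log line above); the loop can be slow on clusters with many buckets:

# On a monitor, find the warning in the cluster log:
grep 'Large omap object found' /var/log/ceph/ceph.log

# Map the pool ID from the log line (56 here) to a pool name:
ceph osd pool ls detail | grep '^pool 56 '

# Match the index object back to its bucket:
marker="f91aeff8-a365-47b4-a1c8-928cd66134e8.44130.1"
for b in $(radosgw-admin bucket list | jq -r '.[]'); do
    radosgw-admin metadata get bucket:"$b" | grep -q "$marker" && echo "$b"
done

The threshold Paul mentions is configurable; if I have the option name right, the value in effect can be checked with:

ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
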
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com