Re: Large OMAP Object

It's a warning, not an error. If you don't consider it a problem, I
believe you can raise osd_deep_scrub_large_omap_object_key_threshold
back to its old default of 2M; the warning here was triggered by the
key count (380425), not by the summed value size.
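
For example, as a minimal sketch (assuming a Nautilus-era cluster that
uses the centralized config store; the value is a number of keys):

$ ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000

The same option can also be set under [osd] in ceph.conf if you manage
configuration through files.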

On Wed, Nov 20, 2019 at 11:37 AM <DHilsbos@xxxxxxxxxxxxxx> wrote:
>
> All;
>
> Since I haven't heard otherwise, I have to assume that the only way to make this go away is to dump the contents of the affected RGW bucket(s) and recreate them?
>
> How did this get past release approval?  A change that makes a valid cluster state invalid, with no mitigation other than downtime, in a minor release.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
>
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of DHilsbos@xxxxxxxxxxxxxx
> Sent: Friday, November 15, 2019 9:13 AM
> To: ceph-users@xxxxxxxxxxxxxx
> Cc: Stephen Self
> Subject: Re:  Large OMAP Object
>
> Wido;
>
> Ok, yes, I have tracked it down to the index for one of our buckets.  I missed the ID in the ceph df output previously.  Next time I'll wait to read replies until I've finished my morning coffee.
>
> How would I go about correcting this?
>
> The content of this bucket is basically just junk, as we're still doing production qualification and workflow planning.  Moving from Windows file shares to self-hosted cloud storage is a significant undertaking.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
>
>
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Wido den Hollander
> Sent: Friday, November 15, 2019 8:40 AM
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Large OMAP Object
>
>
>
> On 11/15/19 4:35 PM, DHilsbos@xxxxxxxxxxxxxx wrote:
> > All;
> >
> > Thank you for your help so far.  I have found the log entries from when the object was found, but don't see a reference to the pool.
> >
> > Here are the logs:
> > 2019-11-14 03:10:16.508601 osd.1 (osd.1) 21 : cluster [DBG] 56.7 deep-scrub starts
> > 2019-11-14 03:10:18.325881 osd.1 (osd.1) 22 : cluster [WRN] Large omap object found. Object: 56:f7d15b13:::.dir.f91aeff8-a365-47b4-a1c8-928cd66134e8.44130.1:head Key count: 380425 Size (bytes): 82896978
> >
>
> In this case it's in pool 56; check 'ceph df' to see which pool that is.
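>
> As a quick sketch, you can also look the pool up by ID directly
> (assuming the usual 'ceph osd pool ls detail' output format, where
> each line starts with "pool <id>"):
>
> $ ceph osd pool ls detail | grep '^pool 56 '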
>
> To me this looks like an RGW bucket whose index grew too big.
>
> Use:
>
> $ radosgw-admin bucket list
> $ radosgw-admin metadata get bucket:<BUCKET>
>
> And match that UUID back to the bucket.
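>
> If you want to script the matching step, something along these lines
> should work (an untested sketch; it assumes 'jq' is installed and that
> 'metadata get' exposes the bucket ID as .data.bucket.bucket_id on your
> release):
>
> $ for b in $(radosgw-admin bucket list | jq -r '.[]'); do
>     id=$(radosgw-admin metadata get bucket:$b | jq -r '.data.bucket.bucket_id')
>     echo "$b $id"
>   done | grep 'f91aeff8-a365-47b4-a1c8-928cd66134e8.44130.1'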
>
> Wido
>
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director – Information Technology
> > Perform Air International Inc.
> > DHilsbos@xxxxxxxxxxxxxx
> > www.PerformAir.com
> >
> >
> >
> > -----Original Message-----
> > From: Wido den Hollander [mailto:wido@xxxxxxxx]
> > Sent: Friday, November 15, 2019 1:56 AM
> > To: Dominic Hilsbos; ceph-users@xxxxxxxxxxxxxx
> > Cc: Stephen Self
> > Subject: Re:  Large OMAP Object
> >
> > Did you check /var/log/ceph/ceph.log on one of the Monitors to see which
> > pool and object the large object is in?
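> >
> > As a rough sketch, grepping the cluster log for the warning text will
> > show the PG and object name, e.g.:
> >
> > $ grep 'Large omap object found' /var/log/ceph/ceph.log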
> >
> > Wido
> >
> > On 11/15/19 12:23 AM, DHilsbos@xxxxxxxxxxxxxx wrote:
> >> All;
> >>
> >> We had a warning about a large OMAP object pop up in one of our clusters overnight.  The cluster is configured for CephFS, but nothing mounts CephFS at this time.
> >>
> >> The cluster mostly uses RGW.  I've checked the cluster log, the MON log, and the MGR log on one of the mons, with no useful references to the pool/PG where the large OMAP object resides.
> >>
> >> Is my only option to find this large OMAP object to go through the OSD logs for the individual OSDs in the cluster?
> >>
> >> Thank you,
> >>
> >> Dominic L. Hilsbos, MBA
> >> Director - Information Technology
> >> Perform Air International Inc.
> >> DHilsbos@xxxxxxxxxxxxxx
> >> www.PerformAir.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com