Re: HEALTH_WARN due to large omap object won't clear even after trim

Thanks, Casey. I will issue a scrub for the PG that contains this
object to speed things along. I'll report back when that's done.
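
For reference, roughly what I intend to run (the pgid below is a
placeholder; ceph osd map reports the real one for this object):

# locate the PG holding the object named in the OSD log
ceph osd map .usage usage.22
# deep-scrub that PG; the large-omap check runs during deep scrub
ceph pg deep-scrub 26.x    # substitute the pgid from the output above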

On Fri, Sep 20, 2019 at 2:50 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>
> Hi Jared,
>
> My understanding is that these 'large omap object' warnings are only
> issued or cleared during scrub, so I'd expect them to go away the next
> time the usage objects get scrubbed.
>
> On 9/20/19 2:31 PM, shubjero wrote:
> > Still trying to solve this one.
> >
> > Here is the corresponding log entry when the large omap object was found:
> >
> > ceph-osd.1284.log.2.gz:2019-09-18 11:43:39.237 7fcd68f96700  0
> > log_channel(cluster) log [WRN] : Large omap object found. Object:
> > 26:86e4c833:::usage.22:head Key count: 2009548 Size (bytes): 369641376
> >
> > I have since trimmed the entire usage log and disabled usage logging
> > altogether. You can see from the output below that there's nothing left
> > in these usage log objects.
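> >
> > Since Charles asked below how the trim was done: roughly the following,
> > with the dates as placeholders spanning the whole log range, and the
> > config change made in ceph.conf before restarting the rgw daemons:
> >
> > # trim the entire usage log; start/end dates here are illustrative
> > radosgw-admin usage trim --start-date=2010-01-01 --end-date=2019-09-18
> > # then disable usage logging in ceph.conf (rgw section) and restart radosgw
> > rgw_enable_usage_log = false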
> >
> > for i in $(rados -p .usage ls); do
> >   echo $i
> >   rados -p .usage listomapkeys $i | wc -l
> > done
> > usage.29
> > 0
> > usage.12
> > 0
> > usage.1
> > 0
> > usage.26
> > 0
> > usage.20
> > 0
> > usage.24
> > 0
> > usage.16
> > 0
> > usage.15
> > 0
> > usage.3
> > 0
> > usage.19
> > 0
> > usage.23
> > 0
> > usage.5
> > 0
> > usage.11
> > 0
> > usage.7
> > 0
> > usage.30
> > 0
> > usage.18
> > 0
> > usage.21
> > 0
> > usage.27
> > 0
> > usage.13
> > 0
> > usage.22
> > 0
> > usage.25
> > 0
> > .
> > 4
> > usage.10
> > 0
> > usage.8
> > 0
> > usage.9
> > 0
> > usage.28
> > 0
> > usage.2
> > 0
> > usage.4
> > 0
> > usage.6
> > 0
> > usage.31
> > 0
> > usage.17
> > 0
> >
> >
> > root@infra:~# rados -p .usage listomapkeys usage.22
> > root@infra:~#
> >
> >
> > On Thu, Sep 19, 2019 at 12:54 PM Charles Alva <charlesalva@xxxxxxxxx> wrote:
> >> Could you please share how you trimmed the usage log?
> >>
> >> Kind regards,
> >>
> >> Charles Alva
> >> Sent from Gmail Mobile
> >>
> >>
> >> On Thu, Sep 19, 2019 at 11:46 PM shubjero <shubjero@xxxxxxxxx> wrote:
> >>> Hey all,
> >>>
> >>> Yesterday our cluster went into HEALTH_WARN due to 1 large omap
> >>> object in the .usage pool (I've posted about this in the past). Last
> >>> time we resolved the issue by trimming the usage log below the alert
> >>> threshold, but now the alert won't clear even after trimming and, this
> >>> time, disabling the usage log entirely.
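> >>>
> >>> The alert threshold, if I'm reading the docs right, is the OSD option
> >>> osd_deep_scrub_large_omap_object_key_threshold. Checking what a running
> >>> OSD is using (the osd id is a placeholder):
> >>>
> >>> # query the value via the OSD's admin socket, on that OSD's host
> >>> ceph daemon osd.<id> config get osd_deep_scrub_large_omap_object_key_threshold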
> >>>
> >>> ceph health detail
> >>> HEALTH_WARN 1 large omap objects
> >>> LARGE_OMAP_OBJECTS 1 large omap objects
> >>>      1 large objects found in pool '.usage'
> >>>      Search the cluster log for 'Large omap object found' for more details.
> >>>
> >>> I've bounced ceph-mon, ceph-mgr, and radosgw, and even issued an OSD
> >>> scrub on the two OSDs that hold PGs for the .usage pool, but the alert
> >>> won't clear.
> >>>
> >>> It's been over 24 hours since I trimmed the usage log.
> >>>
> >>> Any suggestions?
> >>>
> >>> Jared Baker
> >>> Cloud Architect, OICR
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


