Re: Large OMAP Objects in zone.rgw.log pool

What Ceph version is this cluster running? Luminous or later should not be writing any new meta.log entries when it detects a single-zone configuration.
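
For example (assuming an admin keyring is available on the host; note that 'ceph versions' itself only exists on Luminous and later):

$ ceph versions      # per-daemon versions across the cluster
$ ceph --version     # version of the local binaries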

I'd recommend editing your zonegroup configuration (via 'radosgw-admin zonegroup get' and 'set') to set both log_meta and log_data to false, then commit the change with 'radosgw-admin period update --commit'.
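
A rough sketch of that workflow, assuming default names (review the JSON dump before re-importing it; the exact placement of the log_meta/log_data flags in the dump varies by release):

$ radosgw-admin zonegroup get > zonegroup.json
  ... edit zonegroup.json so that "log_meta" and "log_data" read "false" ...
$ radosgw-admin zonegroup set < zonegroup.json
$ radosgw-admin period update --commit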

You can then delete any meta.log.* and data_log.* objects from your log pool using the rados tool.
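
Something along these lines, using the pool name from your output below (run the listing by itself first and sanity-check it before piping it into 'rm'):

$ rados -p us-prd-1.rgw.log ls | grep -E '^(meta\.log|data_log)\.'
$ rados -p us-prd-1.rgw.log ls | grep -E '^(meta\.log|data_log)\.' | \
      while read obj; do rados -p us-prd-1.rgw.log rm "$obj"; done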

On 7/25/19 2:30 PM, Brett Chancellor wrote:
Casey,
  These clusters were set up with the intention of one day doing multisite replication. That has never happened. The cluster has a single realm, which contains a single zonegroup, and that zonegroup contains a single zone.

-Brett

On Thu, Jul 25, 2019 at 2:16 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:

    Hi Brett,

    These meta.log objects store the replication logs for metadata sync
    in multisite. Log entries are trimmed automatically once all other
    zones have processed them. Can you verify that all zones in the
    multisite configuration are reachable and syncing? Does
    'radosgw-admin sync status' on any zone show that it's stuck behind
    on metadata sync? That would prevent these logs from being trimmed
    and result in these large omap warnings.
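
    For example, run it against each zone in turn ('<zone-name>' is just
    a placeholder here):

    $ radosgw-admin sync status --rgw-zone=<zone-name>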

    On 7/25/19 1:59 PM, Brett Chancellor wrote:
    > I'm having an issue similar to
    > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html.
    > I don't see where any solution was proposed.
    >
    > $ ceph health detail
    > HEALTH_WARN 1 large omap objects
    > LARGE_OMAP_OBJECTS 1 large omap objects
    >     1 large objects found in pool 'us-prd-1.rgw.log'
    >     Search the cluster log for 'Large omap object found' for more
    >     details.
    >
    > $ grep "Large omap object" /var/log/ceph/ceph.log
    > 2019-07-25 14:58:21.758321 osd.3 (osd.3) 15 : cluster [WRN] Large
    > omap object found. Object:
    > 51:61eb35fe:::meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19:head
    > Key count: 3382154 Size (bytes): 611384043
    >
    > $ rados -p us-prd-1.rgw.log listomapkeys
    > meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19 |wc -l
    > 3382154
    >
    > $ rados -p us-prd-1.rgw.log listomapvals
    > meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19
    > This returns entries from almost every bucket, across multiple
    > tenants. Several of the entries are from buckets that no longer
    > exist on the system.
    >
    > $ ceph df | egrep 'OBJECTS|.rgw.log'
    >     POOL                ID    STORED     OBJECTS    USED       %USED    MAX AVAIL
    >     us-prd-1.rgw.log    51    758 MiB    228        758 MiB    0        102 TiB
    >
    > Thanks,
    >
    > -Brett
    >
    > _______________________________________________
    > ceph-users mailing list
    > ceph-users@xxxxxxxxxxxxxx
    > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
    _______________________________________________
    ceph-users mailing list
    ceph-users@xxxxxxxxxxxxxx
    http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



