Re: Large omap objects - how to fix ?

Hi,

Sorry to hijack this thread. I have a similar issue, also on 12.2.8,
recently upgraded from Jewel.

In my case all buckets are within the limits:
    # radosgw-admin bucket limit check | jq '.[].buckets[].fill_status' | uniq
    "OK"

    # radosgw-admin bucket limit check | jq '.[].buckets[].objects_per_shard' | sort -n | uniq
    0
    1
    30
    109
    516
    5174
    50081
    50088
    50285
    50323
    50336
    51826

rgw_max_objs_per_shard is set to the default of 100k
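
For what it's worth, the way I've been trying to see what the OSDs are
actually complaining about is to check the deep-scrub threshold and grep
the cluster log on a mon host. A rough sketch only - I'm assuming the
option name below is unchanged in 12.2.8 and that the cluster log is in
its default location; the matching log lines should name the offending
object and its key count ('ceph daemon' has to be run on the host that
carries that OSD):

    # ceph daemon osd.0 config get osd_deep_scrub_large_omap_object_key_threshold
    # grep 'Large omap object found' /var/log/ceph/ceph.log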

---
Alex Cucu

On Fri, Oct 26, 2018 at 4:09 PM Ben Morrice <ben.morrice@xxxxxxx> wrote:
>
> Hello all,
>
> After a recent Luminous upgrade (now running 12.2.8 with all OSDs
> migrated to bluestore; we upgraded from 11.2.0, which was running
> filestore) I am currently seeing the warning 'large omap objects'.
> I know this is related to large buckets in radosgw, and Luminous
> supports 'dynamic sharding' - however I feel that something is missing
> from our configuration and I'm a bit confused about the right approach
> to fix it.
>
> First a bit of background info:
>
> We previously had a multi-site radosgw installation, but we recently
> decommissioned the second site. With the radosgw multi-site
> configuration we had 'bucket_index_max_shards = 0'. Since
> decommissioning the second site, I have removed the secondary zonegroup
> and changed 'bucket_index_max_shards' to 16 for the single primary zone.
> None of our buckets show a 'num_shards' field in the output of
> 'radosgw-admin bucket stats --bucket <bucketname>'.
> Is this normal?
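
It might be worth checking the bucket instance metadata too - as far as
I understand, the per-bucket shard count (if any) is recorded there, and
the zonegroup setting can be confirmed separately. A rough sketch, with
'mybucket' as a placeholder name:

    # radosgw-admin zonegroup get | grep bucket_index_max_shards
    # id=`radosgw-admin bucket stats --bucket mybucket | grep '"id"' | cut -d\" -f4`
    # radosgw-admin metadata get bucket.instance:mybucket:${id} | grep num_shards
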
>
> Also - I'm finding it difficult to work out exactly what to do with the
> buckets that are affected by 'large omap' (see commands below).
> My interpretation of 'search the cluster log' is also listed below.
>
> What do I need to do with the buckets below to get back to an overall
> ceph HEALTH_OK state? :)
>
>
> # ceph health detail
> HEALTH_WARN 2 large omap objects
> 2 large objects found in pool '.bbp-gva-master.rgw.buckets.index'
> Search the cluster log for 'Large omap object found' for more details.
>
> # ceph osd pool get .bbp-gva-master.rgw.buckets.index pg_num
> pg_num: 64
>
> # for i in `ceph pg ls-by-pool .bbp-gva-master.rgw.buckets.index | tail -n +2 | awk '{print $1}'`; do echo -n "$i: "; ceph pg $i query | grep num_large_omap_objects | head -1 | awk '{print $2}'; done | grep ": 1"
> 137.1b: 1
> 137.36: 1
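
Another way to narrow this down might be to count the omap keys per
index object directly instead of going via the PGs, since the object
name then points straight at the bucket id. Just a rough sketch -
listing keys across a large index pool can take a while:

    # for obj in `rados -p .bbp-gva-master.rgw.buckets.index ls`; do echo -n "$obj: "; rados -p .bbp-gva-master.rgw.buckets.index listomapkeys $obj | wc -l; done | sort -t: -k2 -n | tail
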
>
> # cat buckets
> #!/bin/bash
> buckets=`radosgw-admin metadata list bucket | grep \" | cut -d\" -f2`
> for i in $buckets
> do
>    id=`radosgw-admin bucket stats --bucket $i | grep \"id\" | cut -d\" -f4`
>    pg=`ceph osd map .bbp-gva-master.rgw.buckets.index ${id} | awk '{print $11}' | cut -d\( -f2 | cut -d\) -f1`
>    echo "$i:$id:$pg"
> done
> # ./buckets > pglist
> # egrep '137.1b|137.36' pglist |wc -l
> 192
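
One thing to watch with that mapping: as far as I can tell the actual
index objects in the pool are named '.dir.<bucket_id>' (plus a '.<shard>'
suffix once a bucket is sharded), and 'ceph osd map' computes a placement
for whatever name you give it, whether or not such an object exists, so
mapping the bare id may match PGs that don't really hold that bucket's
index. Something along these lines (placeholder bucket name) maps the
real object name instead:

    # id=`radosgw-admin bucket stats --bucket mybucket | grep '"id"' | cut -d\" -f4`
    # ceph osd map .bbp-gva-master.rgw.buckets.index .dir.${id}
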
>
> The following doesn't appear to change anything:
>
> # for bucket in `cut -d: -f1 pglist`; do radosgw-admin reshard add --bucket $bucket --num-shards 8; done
>
> # radosgw-admin reshard process
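
I haven't got to the bottom of this on my side yet, but as far as I
understand 'reshard add' only queues the bucket and 'reshard process'
works through that queue, so it may be worth checking whether anything
is actually pending, or resharding one bucket by hand. Also, the
large-omap counter is only refreshed on deep scrub, so the warning won't
clear until the affected PGs have been deep-scrubbed again. A rough
sketch (placeholder bucket name; 'reshard status' may not exist on every
12.2.x point release):

    # radosgw-admin reshard list
    # radosgw-admin reshard status --bucket mybucket
    # radosgw-admin bucket reshard --bucket mybucket --num-shards 16
    # ceph pg deep-scrub 137.1b
    # ceph pg deep-scrub 137.36
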
>
>
>
> --
> Kind regards,
>
> Ben Morrice
>
> ______________________________________________________________________
> Ben Morrice | e: ben.morrice@xxxxxxx | t: +41-21-693-9670
> EPFL / BBP
> Biotech Campus
> Chemin des Mines 9
> 1202 Geneva
> Switzerland
>


