Hello all,
After a recent Luminous upgrade (now running 12.2.8 with all OSDs
migrated to bluestore; upgraded from 11.2.0, which was running
filestore) I am currently seeing the warning 'large omap objects'.
I know this is related to large buckets in radosgw, and that Luminous
supports 'dynamic sharding'; however, I feel that something is missing
from our configuration, and I'm a bit confused about the right approach
to fix it.
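For what it's worth, I believe dynamic resharding should be enabled by
default in Luminous; the following is roughly how I intend to verify
that on the gateway host (the admin socket name below is a guess for
our setup):

# ceph daemon /var/run/ceph/ceph-client.rgw.bbp-gva-master.asok config show | grep rgw_dynamic_resharding
# radosgw-admin reshard list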
First a bit of background info:
We previously had a multi-site radosgw installation; however, we
recently decommissioned the second site. With the radosgw multi-site
configuration we had 'bucket_index_max_shards = 0'. Since
decommissioning the second site, I have removed the secondary zonegroup
and changed 'bucket_index_max_shards' to 16 for the single primary zone.
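This is the command I would use to double-check that change; I assume
the 'bucket_index_max_shards' value reported per zone in the zonegroup
output is the authoritative one:

# radosgw-admin zonegroup get | grep bucket_index_max_shards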
None of our buckets have a 'num_shards' field in the output of
'radosgw-admin bucket stats --bucket <bucketname>'.
Is this normal?
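In case I am looking in the wrong place, I was planning to also check
the bucket instance metadata (bucket name and id below are
placeholders):

# radosgw-admin metadata get bucket.instance:<bucketname>:<bucket_id> | grep num_shards

I think 'radosgw-admin bucket limit check' is also meant to summarise
shard counts per bucket, but I may be misreading its output.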
Also, I'm finding it difficult to work out exactly what to do with the
buckets that are affected by 'large omap' (see commands below).
My interpretation of 'search the cluster log' is also listed below.
What do I need to do with the buckets below to get back to an overall
ceph HEALTH_OK state? :)
# ceph health detail
HEALTH_WARN 2 large omap objects
2 large objects found in pool '.bbp-gva-master.rgw.buckets.index'
Search the cluster log for 'Large omap object found' for more details.
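My literal reading of 'search the cluster log' is something like the
following (assuming the default cluster log location on a mon host),
plus the per-PG query further down:

# grep 'Large omap object found' /var/log/ceph/ceph.log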
# ceph osd pool get .bbp-gva-master.rgw.buckets.index pg_num
pg_num: 64
# for i in `ceph pg ls-by-pool .bbp-gva-master.rgw.buckets.index |
    tail -n +2 | awk '{print $1}'`; do
    echo -n "$i: "
    ceph pg $i query | grep num_large_omap_objects | head -1 | awk '{print $2}'
  done | grep ": 1"
137.1b: 1
137.36: 1
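As another way of identifying the offending index objects directly, I
was considering counting the omap keys per object (rough sketch only,
and presumably slow on a large index pool):

# for obj in `rados -p .bbp-gva-master.rgw.buckets.index ls`; do
    echo -n "$obj: "
    rados -p .bbp-gva-master.rgw.buckets.index listomapkeys $obj | wc -l
  done | sort -t: -k2 -rn | head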
# cat buckets
#!/bin/bash
# List all bucket names known to radosgw
buckets=`radosgw-admin metadata list bucket | grep \" | cut -d\" -f2`
for i in $buckets
do
    # Bucket instance id for this bucket
    id=`radosgw-admin bucket stats --bucket $i | grep \"id\" | cut -d\" -f4`
    # PG in the index pool that the bucket id hashes to
    pg=`ceph osd map .bbp-gva-master.rgw.buckets.index ${id} |
        awk '{print $11}' | cut -d\( -f2 | cut -d\) -f1`
    echo "$i:$id:$pg"
done
# ./buckets > pglist
# egrep '137.1b|137.36' pglist |wc -l
192
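Since 192 buckets map onto those two PGs, my plan was to eyeball which
of them are actually large, along these lines (untested):

# for b in `egrep '137.1b|137.36' pglist | cut -d: -f1`; do
    echo "== $b"
    radosgw-admin bucket stats --bucket $b | grep num_objects
  done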
The following doesn't appear to change anything:
# for bucket in `cut -d: -f1 pglist`; do
    radosgw-admin reshard add --bucket $bucket --num-shards 8
  done
# radosgw-admin reshard process
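I could not see how to confirm whether those reshard jobs are actually
being queued and processed; I assume the commands below are the way to
check, and perhaps manual resharding per bucket is the better route
(the --num-shards value is just a guess on my part)?

# radosgw-admin reshard list
# radosgw-admin reshard status --bucket <bucketname>
# radosgw-admin bucket reshard --bucket <bucketname> --num-shards 16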
--
Kind regards,
Ben Morrice
______________________________________________________________________
Ben Morrice | e: ben.morrice@xxxxxxx | t: +41-21-693-9670
EPFL / BBP
Biotech Campus
Chemin des Mines 9
1202 Geneva
Switzerland
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com