Re: Resolving LARGE_OMAP_OBJECTS

On Friday, March 5th, 2021 at 15:20, Drew Weaver <drew.weaver@xxxxxxxxxx> wrote:
> Sorry to sound clueless but no matter what I search for on El Goog I can't figure out how to answer the question as to whether dynamic sharding is enabled in our environment.
>
> It's not configured as true in the config files, but it is the default.
>
> Is there a radosgw-admin command to determine whether or not it's enabled in the running environment?

If `rgw_dynamic_resharding` is not explicitly set to `false` anywhere in your environment, I think we can safely assume dynamic resharding is enabled. And if any of your buckets have more than one shard and you never resharded them manually, you'll know for sure it's working: you can check the number of shards on a bucket with `radosgw-admin bucket stats --bucket=<name>`; look for the `num_shards` field. You can also run `radosgw-admin bucket limit check` to see whether any of your buckets are about to be resharded.
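For reference, a few commands that should answer that directly; the daemon name below is just a placeholder for your own RGW instance, and the admin-socket query has to run on the gateway host:

```
# Ask a running RGW daemon which value it is actually using
# (client.rgw.gateway1 is an example name, substitute your own):
ceph daemon client.rgw.gateway1 config get rgw_dynamic_resharding

# Shard count for a single bucket:
radosgw-admin bucket stats --bucket=mybucket | grep num_shards

# Buckets approaching the objects-per-shard limit:
radosgw-admin bucket limit check
```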

Assuming dynamic resharding is enabled and none of your buckets are about to be resharded, I would then find out which object has too many OMAP keys by grepping the logs (there's a quick sketch of that further down). The object name will contain the bucket ID (also found in the output of `radosgw-admin bucket stats`), so you'll know which bucket is causing the issue. You can then count the OMAP keys in each shard of that bucket's index with

```
# count the OMAP keys in every shard of the bucket index
for obj in $(rados -p default.rgw.buckets.index ls | grep eaf0ece5-9f4a-4aa8-9d67-8c6698f7919b.88726492.4); do
  printf '%-60s %7d\n' "$obj" "$(rados -p default.rgw.buckets.index listomapkeys "$obj" | wc -l)"
done
```

(where `eaf0ece5-9f4a-4aa8-9d67-8c6698f7919b.88726492.4` is your bucket ID). If the key counts are very uneven across the shards, there's probably an issue that needs to be addressed. If they are relatively even but slightly above the warning threshold, it's probably a versioned bucket, and it should be safe to simply increase the threshold.
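In case it helps, this is roughly how I go from the health warning to the bucket; the log path and the grep context are guesses, adjust them for your setup:

```
# The deep-scrub warning ends up in the cluster log on the mon hosts:
grep -i 'large omap object' /var/log/ceph/ceph.log

# Dump stats for all buckets and search for the bucket ID taken from the
# object name (the -B8 context is only a rough guess at the output layout):
radosgw-admin bucket stats | grep -B8 'eaf0ece5-9f4a-4aa8-9d67-8c6698f7919b.88726492.4'
```

And if it does turn out to be a versioned bucket with evenly spread keys, raising the warning threshold is a single config change; if I remember right the relevant option is `osd_deep_scrub_large_omap_object_key_threshold` (200000 keys by default on recent releases):

```
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 300000
```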

Cheers,

--
Ben
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
