Re: Not all Bucket Shards being used

> Thank you for the information, Christian. When you reshard, the bucket id is updated (with recent versions of Ceph, a generation number is incremented). The first bucket id matches the bucket marker, but after the first reshard they diverge.

This makes a lot of sense and explains why the large omap objects do
not go away. It is the old shards that are too big.
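
For anyone following along: the id and marker of a bucket can be
compared directly with bucket stats. A minimal sketch, with the bucket
name as a placeholder:

# radosgw-admin bucket stats --bucket=<bucket-name> \
    | grep -E '"(id|marker)"'

On a never-resharded bucket the two fields match; after a reshard the
id moves on while the marker stays the same.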

> The bucket id is in the names of the currently used bucket index shards. You’re searching for the marker, which means you’re finding older bucket index shards.
>
> Change your commands to these:
>
> # rados -p raum.rgw.buckets.index ls \
>    |grep 3caabb9a-4e3b-4b8a-8222-34c33dd63210.10648356.1 \
>    |sort -V
>
> # rados -p raum.rgw.buckets.index ls \
>    |grep 3caabb9a-4e3b-4b8a-8222-34c33dd63210.10648356.1 \
>    |sort -V \
>    |xargs -IOMAP sh -c \
>        'rados -p raum.rgw.buckets.index listomapkeys OMAP | wc -l'

I don't think the outputs are very interesting here. They are as expected:
- 131 rados objects listed (the current omap index shards)
- each omap contains about 70k keys (below the 100k limit).
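
For completeness, the warning threshold actually in effect can be read
from the cluster config; a sketch, assuming the usual option name:

# ceph config get osd osd_deep_scrub_large_omap_object_key_threshold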

> When you refer to the “second zone”, what do you mean? Is this cluster using multisite? If and only if your answer is “no”, then it’s safe to remove old bucket index shards. Depending on the version of Ceph running when the reshard was done, they were either intentionally left behind (earlier behavior) or removed automatically (later behavior).

Yes, this cluster uses multisite: one realm, one zonegroup with
two zones (bidirectional sync).
Ceph warns about resharding on the non-metadata zone, so I only
resharded on the metadata zone.
The resharding was done using radosgw-admin v16.2.6 on a Ceph
cluster running v17.2.5.
Is there a way to get rid of the old (big) shards without breaking anything?
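
For the archives: on a single-site cluster my understanding is that
the leftover bucket instances could be cleaned up along these lines
(explicitly not safe on a multisite setup like mine, so only a sketch):

# radosgw-admin reshard stale-instances list
# radosgw-admin reshard stale-instances rm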

Christian