Re: Understanding reshard issues

Hi!

On 13.12.17 at 20:50, Graham Allan wrote:
After our Jewel to Luminous 12.2.2 upgrade, I ran into some of the same issues reported earlier on the list under "rgw resharding operation seemingly won't end".

Yes, those were/are my threads; I also have this issue.


I was able to correct the buckets using the "radosgw-admin bucket check --fix" command, and later disabled automatic resharding.

Were you able to manually reshard a bucket after the "--fix"? Here, once a bucket has been damaged, the manual reshard process freezes.
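
(For reference, the repair and the disabling of automatic resharding look roughly like this here; the bucket name is a placeholder, and rgw_dynamic_resharding is, as far as I know, the Luminous option that controls automatic resharding:)

# radosgw-admin bucket check --fix --bucket=<bucket-name>

and in ceph.conf on the rgw hosts:

rgw dynamic resharding = false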

As an experiment, I selected an unsharded bucket to attempt a manual reshard. I added it to the reshard list, then ran "radosgw-admin reshard execute". The bucket in question contains 184,000 objects and was being converted from 1 to 3 shards.

I'm trying to understand what I found...

1) the "radosgw-admin reshard execute" command never returned. I had expected it to kick off a background operation, but possibly that was mistaken.

Yes, same behaviour here. Someone on the list mentioned that resharding should actually happen quite fast (at most a few minutes).
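
For completeness, the manual sequence we are talking about is roughly the following, assuming the reshard subcommands of 12.2.2 (bucket name and shard count are placeholders matching Graham's example):

# radosgw-admin reshard add --bucket=<bucket-name> --num-shards=3
# radosgw-admin reshard list
# radosgw-admin reshard execute

My expectation would be that "reshard execute" processes the queued entry and returns once the new index shards are written, instead of hanging.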

So there's clearly something wrong here, and I am glad I am not the only one experiencing it.
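
In case it helps with debugging: assuming the status and cancel subcommands behave as documented for 12.2.2, a stuck entry should be inspectable and removable with:

# radosgw-admin reshard status --bucket=<bucket-name>
# radosgw-admin reshard cancel --bucket=<bucket-name>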

For comparison: what is your infrastructure? Mine is:

* three beefy hosts (64GB RAM) with 4 OSDs each for data (HDD), and 2 OSDs each on SSDs for the index.
* all bluestore (DB/WAL for the HDD OSDs also on SSD partitions)
* radosgw runs on each of these OSD hosts (the hosts are mostly idle, so I don't think co-locating the RADOS gateways with the OSDs is the cause of my poor performance)
* 3 separate monitor/mgr hosts
* OS is CentOS 7, running Ceph 12.2.2
* We use several buckets, all with versioning enabled, each holding many (100k to 12M) rather small objects.

pool settings:
# ceph osd pool ls detail
pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 174 lfor 0/172 flags hashpspool stripe_width 0
pool 2 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 842 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.control' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 843 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 950 lfor 0/948 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.log' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 845 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 846 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 847 lfor 0/246 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 849 flags hashpspool stripe_width 0 application rgw

Regards,

Martin