Re: Understanding reshard issues

On 12/14/2017 04:00 AM, Martin Emrich wrote:
Hi!

On 13.12.17 at 20:50, Graham Allan wrote:
After our Jewel to Luminous 12.2.2 upgrade, I ran into some of the same issues reported earlier on the list under "rgw resharding operation seemingly won't end".

Yes, those were/are my threads; I have this issue as well.


I was able to correct the buckets using the "radosgw-admin bucket check --fix" command, and later disabled the auto resharding.

Were you able to manually reshard a bucket after the "--fix"? Here, once a bucket has been damaged, the manual reshard process freezes.

Interesting... the test bucket I tried to reshard below was one that had previously needed "bucket check --fix".
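(For reference, the command and setting in question are roughly the following; the bucket name is just a placeholder:

  # check and repair the bucket index
  radosgw-admin bucket check --fix --bucket=<bucket>

  # ceph.conf, rgw section: turn off dynamic resharding, then restart the radosgw instances
  rgw_dynamic_resharding = false
)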

I just tried the same thing on another old (and small, ~100 object) bucket which had not previously seen problems - I got the same hang.

Although, I was doing a "reshard add" and "reshard execute" on the bucket, which I guess is more of a manually triggered automatic reshard than a true manual "bucket reshard" command. Having said that, the manual "bucket reshard" command now also freezes on that bucket.
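(To be explicit, the two paths I'm referring to are roughly these; bucket name and shard count are just placeholders:

  # queue the bucket for resharding, then process the queue
  radosgw-admin reshard add --bucket=<bucket> --num-shards=3
  radosgw-admin reshard list
  radosgw-admin reshard execute

  # versus resharding the bucket directly
  radosgw-admin bucket reshard --bucket=<bucket> --num-shards=3
)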

As an experiment, I selected an unsharded bucket to attempt a manual reshard. I added it to the reshard list, then ran "radosgw-admin reshard execute". The bucket in question contains 184,000 objects and was being converted from 1 to 3 shards.

I'm trying to understand what I found...

1) the "radosgw-admin reshard execute" never returned. Somehow I expected it to kick off a background operation, but possibly this was mistaken.

Yes, same behaviour here. Someone on the list mentioned that resharding should actually happen quite fast (at most a few minutes).

So there's clearly something wrong here, and I am glad I am not the only one experiencing it.
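(In case it helps with comparing notes: the reshard queue and per-bucket state can presumably be inspected with something like the commands below, though I'm not certain every subcommand is available in 12.2.2:

  # list pending/stuck entries in the reshard queue
  radosgw-admin reshard list

  # show resharding status for one bucket
  radosgw-admin reshard status --bucket=<bucket>

  # remove a stuck entry from the reshard queue
  radosgw-admin reshard cancel --bucket=<bucket>
)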

To compare: what is your infrastructure? Mine is:

* three beefy hosts (64GB RAM) with 4 OSDs each for data (HDD), and 2 OSDs each on SSDs for the index.
* all bluestore (DB/WAL for the HDD OSDs also on SSD partitions)
* radosgw runs on each of these OSD hosts (they are mostly idle, so I don't think running the rados gateways on the OSD hosts is the cause of my poor performance)
* 3 separate monitor/mgr hosts
* OS is CentOS 7, running Ceph 12.2.2
* We use several buckets, all with Versioning enabled, for many (100k to 12M) rather small objects.
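(Regarding the bluestore OSDs above: an HDD OSD with its DB/WAL on a separate SSD partition would be created roughly like this; device paths are just examples:

  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1
)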

This cluster has been around for some time (since Firefly) and is running Ubuntu 14.04; I will be converting it to CentOS 7 over the next few weeks or months. It's only used for the object store, no rbd or cephfs.

3 dedicated mons
9 large OSD nodes with ~60x 6 TB OSDs each, plus a handful of SSDs
4 radosgw nodes (2 Ubuntu, 2 CentOS 7)

The main radosgw storage pools are EC 4+2 filestore on spinning drives; the indexes are on 3-way replicated filestore SSDs.
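(Roughly speaking, the data pool is an EC 4+2 pool created along these lines; profile name, pool name and PG counts are just examples:

  ceph osd erasure-code-profile set ec42 k=4 m=2
  ceph osd pool create default.rgw.buckets.data 2048 2048 erasure ec42

with the index pool on a 3x replicated crush rule restricted to the SSD OSDs.)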

--
Graham Allan
Minnesota Supercomputing Institute - gta@xxxxxxx

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


