RGW Dynamic bucket index resharding keeps resharding all buckets

Hello,

We're running into some problems with dynamic bucket index resharding. After an upgrade from Ceph 12.2.2 to 12.2.5, which fixed an issue with resharding when using tenants (which we do), the cluster was busy resharding for two days straight, processing the same buckets over and over again.
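
For completeness, this is roughly how we disabled dynamic resharding at the time (the option name is as documented for Luminous; the client section name below is just a placeholder for our setup):

    # ceph.conf on the RGW hosts, in the relevant [client.rgw.<name>] section
    rgw dynamic resharding = false

    # or at runtime via the admin socket on the RGW host, per daemon
    ceph daemon client.rgw.<name> config set rgw_dynamic_resharding false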

After disabling it and re-enabling it a while later, it resharded all buckets again and then kept quiet for a bit. Later on it started resharding buckets over and over again, even buckets to which no data had been added in the meantime. In the reshard list it always says 'old_num_shards: 1' for every bucket, even though I can confirm with 'bucket stats' that the desired number of shards is already present. It looks like the background process that scans the buckets doesn't properly recognize the number of shards a bucket currently has. When I manually add a reshard job, it does recognize the current number of shards correctly.
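
To illustrate what we're seeing (bucket and tenant names are placeholders; since we use tenants, buckets are specified as tenant/bucket):

    radosgw-admin reshard list
    # every entry shows "old_num_shards": 1, even for buckets that were
    # already resharded by a previous run

    radosgw-admin bucket stats --bucket=<tenant>/<bucket>
    # already reports the desired number of shards

    # a manually queued job does pick up the current shard count:
    radosgw-admin reshard add --bucket=<tenant>/<bucket> --num-shards=<n>
    radosgw-admin reshard list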

On a side note, we had two buckets in the reshard list that were removed a long while ago. We were unable to cancel the reshard jobs for those buckets. After recreating the users and buckets we were able to remove them from the list, so they are no longer present. Probably not relevant, but you never know.
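
In case it matters, this is the cancel command we tried for those stale entries; it only worked once the users and buckets existed again:

    radosgw-admin reshard cancel --bucket=<tenant>/<bucket>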

Are we missing something, or are we running into a bug?

Thanks,

Sander
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


