Unable to cancel buckets from resharding queue

We are running into some issues with the bucket resharding queue on a Ceph Mimic cluster at one of our customers. I suspect that some of these issues are related to upgrades from earlier versions of the cluster/radosgw.

1) When we cancel the resharding of a bucket, the resharding entry is removed from the queue and almost immediately re-added. We confirm the removal by listing the omap keys of all the reshard.0000## objects: the relevant omap key disappears, but after a short time it is re-added. I haven't yet determined which process adds it back, but I can only think it is one of the two rgws.
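
For reference, this is roughly what we do (the bucket name is a placeholder, and we assume the default of 16 reshard log objects):

    # Cancel the pending reshard for the bucket
    radosgw-admin reshard cancel --bucket=<bucket-name>

    # Dump the omap keys of every reshard log object in the log pool
    for i in $(seq -f "%06g" 0 15); do
        rados -p default.rgw.log --namespace reshard listomapkeys reshard.$i
    done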

2) We see a lot of objects (1265) in the reshard namespace of the log pool, "default.rgw.log:reshard". Most look like they might be old bucket markers or something similar, and most of them have not been touched in a long time (mtime in 2018).
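
To illustrate how we look at them (the object name is a placeholder):

    # List all objects in the reshard namespace of the log pool
    rados -p default.rgw.log --namespace reshard ls

    # Inspect the size and mtime of an individual object
    rados -p default.rgw.log --namespace reshard stat <object-name>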

Some numbers (gathered as sketched after the list):

Total buckets: 569
Objects in index pool: 649
Objects in default.rgw.log:reshard namespace: 1265 (of which 16 are the 'rgw_reshard_num_logs' reshard log objects, all size 0)
Buckets in reshard queue: 41
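
The numbers above were gathered roughly like this (from memory, and jq is assumed to be available, so treat these as sketches rather than exact invocations):

    # Total buckets
    radosgw-admin bucket list | jq length

    # Objects in the bucket index pool
    rados -p default.rgw.buckets.index ls | wc -l

    # Objects in the reshard namespace of the log pool
    rados -p default.rgw.log --namespace reshard ls | wc -l

    # Buckets in the reshard queue
    radosgw-admin reshard list | jq length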

The objects in "default.rgw.log:reshard" follow a naming scheme similar to that of the index objects, but I cannot relate them to each other directly. Obviously there are far more of them than there are bucket index objects.
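
What we did try, roughly: index objects are named after the bucket instance marker (".dir.<marker>", plus a shard suffix for sharded indexes), so a marker can in principle be mapped back to a bucket (the bucket name below is a placeholder):

    # Sample the index object names; the marker is embedded after ".dir."
    rados -p default.rgw.buckets.index ls | head

    # Bucket stats show both the "id" and "marker" fields for comparison
    radosgw-admin bucket stats --bucket=<bucket-name> | grep -E '"(id|marker)"'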

The pools used for RGW are from an older generation of radosgw:
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys
default.rgw.buckets.index
default.rgw.buckets.data
default.rgw.users.email
default.rgw.buckets.non-ec
default.rgw.users.swift
default.rgw.usage

My two main questions are:
1) What process, other than dynamic resharding, could cause these buckets to be re-added to the resharding queue? (See the logging sketch after this list.)
2) Do other people also see lots of objects in the reshard pool/namespace, and can somebody help me understand what these objects are?
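
Regarding question 1, one thing we might still try is raising rgw debug logging on both gateways while watching the omap key, to catch whichever process re-adds the entry. Roughly (daemon name, bucket name, and log path are placeholders for our setup):

    # On the host running the rgw, raise the debug level via the admin socket
    ceph daemon client.rgw.<name> config set debug_rgw 20

    # Once the omap key reappears, search the rgw log for the bucket name
    grep '<bucket-name>' /var/log/ceph/ceph-client.rgw.<name>.log

    # Afterwards, restore the default level
    ceph daemon client.rgw.<name> config set debug_rgw 1/5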

If somebody can point me in the direction of more documentation or a good talk on the resharding mechanism, that would also be great.

Thanks and with kind regards,

Wout
42on