Re: rgw multisite octopus - bucket can not be resharded after cancelling prior reshard process

Hi Christian,
resharding is not an issue, because we only sync the metadata, like AWS S3 does.

But this looks very broken to me. Does anyone have an idea how to fix it?
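For anyone hitting the same state: below is a rough sketch of the radosgw-admin commands I would use to inspect and clear an interrupted reshard. The bucket name `mybucket` is a placeholder, and exact subcommand behaviour can vary between releases, so treat this as a starting point rather than a verified procedure:

```shell
# List reshard operations still queued or in progress
radosgw-admin reshard list

# Check the resharding status flags on the bucket's index shards
radosgw-admin reshard status --bucket mybucket

# Cancel a pending reshard entry for the bucket, if one is still queued
radosgw-admin reshard cancel --bucket mybucket

# Look for leftover bucket instances from the aborted reshard
radosgw-admin reshard stale-instances list

# Remove stale instances (in multisite, run this on the metadata master zone)
radosgw-admin reshard stale-instances rm
```

If the bucket still misbehaves after that, comparing `radosgw-admin bucket stats --bucket mybucket` against `radosgw-admin metadata get bucket:mybucket` can help spot a mismatch between the bucket entrypoint and the instance it points at.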

> Am 13.10.2022 um 11:58 schrieb Christian Rohmann <christian.rohmann@xxxxxxxxx>:
> 
> Hey Boris,
> 
>> On 07/10/2022 11:30, Boris Behrens wrote:
>> I just wanted to reshard a bucket but mistyped the number of shards. In a
>> reflex I hit Ctrl-C and waited. It looked like the resharding did not
>> finish, so I cancelled it, and now the bucket is in this state.
>> How can I fix it? It does not show up in the stale-instances list. It's also
>> a multisite environment (we only sync metadata).
> I believe resharding is not supported with rgw multisite (https://docs.ceph.com/en/latest/radosgw/dynamicresharding/#multisite)
> but is being worked on / implemented for the Quincy release, see https://tracker.ceph.com/projects/rgw/issues?query_id=247
> 
> But you are not syncing the data in your deployment? Maybe that's a different case, then?
> 
> 
> 
> Regards
> 
> Christian
> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx