Re: RGW dynamic resharding blocks write ops

We had quite a small window yesterday to debug; I found the error messages, but we haven't collected the logs yet, I will ask them to do that on Monday. I *think* the error was something like this:

resharding operation on bucket index detected, blocking block_while_resharding ERROR: bucket is still resharding, please retry

But I'll verify and ask them to collect the logs.

[1] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/4XMMPSHW7OQ3NU7IE4QFK6A2QVDQ2CJR/

Quoting Casey Bodley <cbodley@xxxxxxxxxx>:

while a bucket is resharding, rgw will retry several times internally
to apply the write before returning an error to the client. while most
buckets can be resharded within seconds, very large buckets may hit
these timeouts. any other cause of slow osd ops could also have that
effect. it can be helpful to pre-shard very large buckets to avoid
these resharding delays
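For reference, pre-sharding might look something like the following sketch (bucket name and shard count are placeholders; the objects-per-shard rule of thumb comes from the default `rgw_max_objs_per_shard` of 100000):

```shell
# Check the current shard count and object count for the bucket:
radosgw-admin bucket stats --bucket=mybucket

# Reshard manually during a maintenance window, sizing the shard
# count for the bucket's expected growth (roughly one shard per
# 100k objects; odd/prime counts are commonly recommended):
radosgw-admin bucket reshard --bucket=mybucket --num-shards=101
```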

can you tell which error code was returned to the client there? it
should be a retryable error, and many http clients have retry logic to
prevent these errors from reaching the application
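As a minimal sketch of what such client-side retry logic can look like (all names here are hypothetical, and the assumption is that the retryable error surfaces as an HTTP status code such as 503):

```python
import random
import time


class TransientError(Exception):
    """Hypothetical error type carrying an HTTP status code."""

    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status


def with_retries(op, retries=5, base_delay=0.5, retryable=(503,)):
    """Call op(), retrying with exponential backoff plus jitter
    when it raises a TransientError with a retryable status."""
    for attempt in range(retries):
        try:
            return op()
        except TransientError as e:
            if e.status not in retryable or attempt == retries - 1:
                raise
            # back off: base_delay, 2x, 4x, ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Many S3 SDKs ship equivalent logic built in; the point is that a write failing once during a reshard should be retried rather than bubbled up to the application.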

On Fri, Jul 7, 2023 at 6:35 AM Eugen Block <eblock@xxxxxx> wrote:

Hi *,
last week I successfully upgraded a customer cluster from Nautilus to
Pacific with no real issues; their main use case is RGW. A couple of
hours after most of the OSDs were upgraded (the RGWs were not yet),
their application software reported an error: it couldn't write to a
bucket. This error occurred again two days ago, and in the RGW logs I
found the relevant messages showing that resharding was happening at
that time. I'm aware that this is nothing unusual, but I can't find
anything helpful on how to prevent it, except for deactivating dynamic
resharding and then resharding manually during maintenance windows. We
don't know yet whether any data is actually missing after the bucket
access recovered; that still needs to be investigated. Since Nautilus
already had dynamic resharding enabled, I wonder if they were just
lucky until now, for example because resharding happened while no data
was being written to the buckets, or if resharding simply didn't
happen until then. I have no access to the cluster, so I don't have
any bucket stats available right now. I found this thread [1] about an
approach to prevent blocked IO, but it's from 2019 and I don't know
how far that got.
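For what it's worth, the deactivate-and-reshard-manually approach could be sketched roughly like this (a sketch, not a recommendation; section/option names are the standard Ceph ones, but verify them against your release):

```shell
# Disable dynamic resharding for all RGW daemons via the config db:
ceph config set client.rgw rgw_dynamic_resharding false

# See which buckets the dynamic resharder would otherwise pick up,
# i.e. those exceeding the objects-per-shard limit:
radosgw-admin bucket limit check

# Show any resharding operations currently queued:
radosgw-admin reshard list
```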

There are many users/operators on this list who use RGW more than me,
how do you deal with this? Are your clients better prepared for these
events? Any comments are appreciated!

Thanks,
Eugen

[1]
https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/NG56XXAM5A4JONT4BGPQAZUTJAYMOSZ2/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx






