Re: RGW dynamic resharding blocks write ops

Hi again, I got the log excerpt with the rgw error message:

s3:put_obj block_while_resharding ERROR: bucket is still resharding, please retry

Below is the message in context. I don't see a return code, though, only 206 for the GET requests. Unfortunately we only have a recorded PuTTY session, so the information is limited. Does it help anyway to get to the bottom of this?

One more thing: we noticed that the default for rgw_reshard_bucket_lock_duration was changed somewhere in Nautilus from 120 to 360 seconds. They hadn't reported these errors before the upgrade, so I suspect either they were simply lucky and never hit resharding while trying to write, or the lock duration was actually still 120 seconds, which may have been acceptable for the application. It's all guesswork at the moment; we don't have config dumps from before the upgrade, at least none that I'm aware of. Anyway, I still need to discuss with them whether disabling dynamic resharding (and then resharding manually during maintenance windows) is the way to go, and whether to preshard new buckets if they can estimate how many objects a new bucket will hold.

The error message repeats for around three and a half minutes, which apparently is how long it took to reshard the bucket. Maybe reducing the lock duration back to 120 seconds would also help here, but I wonder what the consequences would be. Would resharding stop after two minutes and leave something orphaned behind, or how exactly does the lock duration affect the process?

One more question: I see these INFO messages "found lock on <BUCKET>", but the "bucket is still resharding" error message doesn't contain a bucket name. I see the INFO messages a lot, not only during the application timeout errors, so they don't seem to be related. How can I tell which bucket is throwing the error during resharding?
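For reference, these are roughly the knobs and commands I'm looking at; just a sketch with a placeholder bucket name, assuming the usual ceph config / radosgw-admin interfaces, nothing verified on their cluster yet:

# check what is currently configured
ceph config get client.rgw rgw_dynamic_resharding
ceph config get client.rgw rgw_reshard_bucket_lock_duration

# current shard count / object count of the affected bucket
radosgw-admin bucket stats --bucket=<BUCKET>

# pending/running reshard operations
radosgw-admin reshard list
radosgw-admin reshard status --bucket=<BUCKET>

# only if we agree to disable dynamic resharding and reshard
# manually during a maintenance window instead
ceph config set client.rgw rgw_dynamic_resharding false
radosgw-admin bucket reshard --bucket=<BUCKET> --num-shards=<N>

The last two I would only run once it's settled with them that manual resharding during maintenance windows is the way to go.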

Thanks,
Eugen

709+0000 7f23a77be700 1 beast: 0x7f24dc1d85d0: <IP> - ICAS_nondicom [06/Jul/2023:00:06:58.613 +0000] "GET /shaprod-lts/20221114193114-20521114-77bfd>
305+0000 7f239dfab700 0 req 17231063235781096603 91.367729187s s3:put_obj block_while_resharding ERROR: bucket is still resharding, please retry
313+0000 7f23246b8700 0 req 13860563368404093374 91.383728027s s3:put_obj block_while_resharding ERROR: bucket is still resharding, please retry
313+0000 7f2382f75700 0 req 17231063235781096603 91.375732422s s3:put_obj NOTICE: resharding operation on bucket index detected, blocking
313+0000 7f231669c700 0 req 13860563368404093374 91.383728027s s3:put_obj NOTICE: resharding operation on bucket index detected, blocking
365+0000 7f23a0fb1700 0 INFO: RGWReshardLock::lock found lock on jivex-002-p2s3:d2c448cb-4f31-4f28-ac93-3941982d2f46.284023468.1 to be held by another RGW p>
365+0000 7f22fe66c700 0 INFO: RGWReshardLock::lock found lock on jivex-002-p2s3:d2c448cb-4f31-4f28-ac93-3941982d2f46.284023468.1 to be held by another RGW p>
365+0000 7f237c768700 0 INFO: RGWReshardLock::lock found lock on jivex-002-p2s3:d2c448cb-4f31-4f28-ac93-3941982d2f46.284023468.1 to be held by another RGW p>
365+0000 7f2361732700 0 INFO: RGWReshardLock::lock found lock on jivex-002-p2s3:d2c448cb-4f31-4f28-ac93-3941982d2f46.284023468.1 to be held by another RGW p>
409+0000 7f231669c700 0 INFO: RGWReshardLock::lock found lock on jivex-002-p2s3:d2c448cb-4f31-4f28-ac93-3941982d2f46.284023468.1 to be held by another RGW p>
409+0000 7f2382f75700 0 INFO: RGWReshardLock::lock found lock on jivex-002-p2s3:d2c448cb-4f31-4f28-ac93-3941982d2f46.284023468.1 to be held by another RGW p>
669+0000 7f22e3e37700 0 req 18215535743838894575 91.735725403s s3:put_obj block_while_resharding ERROR: bucket is still resharding, please retry
809+0000 7f2326ebd700 0 req 18215535743838894575 91.875732422s s3:put_obj NOTICE: resharding operation on bucket index detected, blocking




Quoting Eugen Block <eblock@xxxxxx>:

We only had quite a small window yesterday to debug; I found the error messages, but we didn't collect the logs yet. I will ask them to do that on Monday. I *think* the error was something like this:

resharding operation on bucket index detected, blocking
block_while_resharding ERROR: bucket is still resharding, please retry

But I'll verify and ask them to collect the logs.

[1] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/4XMMPSHW7OQ3NU7IE4QFK6A2QVDQ2CJR/

Quoting Casey Bodley <cbodley@xxxxxxxxxx>:

while a bucket is resharding, rgw will retry several times internally
to apply the write before returning an error to the client. while most
buckets can be resharded within seconds, very large buckets may hit
these timeouts. any other cause of slow osd ops could also have that
effect. it can be helpful to pre-shard very large buckets to avoid
these resharding delays
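
as a sketch (placeholder bucket name and shard count, and the exact config target depends on how your rgw daemons are named):

# default shard count for newly created buckets
ceph config set client.rgw rgw_override_bucket_index_max_shards 101

# or pre-shard a known-large bucket up front, before it grows into the limits
radosgw-admin bucket reshard --bucket=<BUCKET> --num-shards=101

the target shard count would come from the expected object count, roughly
expected objects divided by rgw_max_objs_per_shard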

can you tell which error code was returned to the client there? it
should be a retryable error, and many http clients have retry logic to
prevent these errors from reaching the application
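
as a rough sketch (assuming a boto3-based client; the endpoint and credentials below are placeholders), enabling the sdk's built-in retries would look something like this:

import boto3
from botocore.config import Config

# "standard" retry mode retries throttling and transient 5xx errors
# with exponential backoff, up to max_attempts total attempts
retry_cfg = Config(retries={"max_attempts": 10, "mode": "standard"})

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",               # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    config=retry_cfg,
)

# a put that lands in the resharding window should then be retried by the
# sdk instead of the error surfacing in the application right away
s3.put_object(Bucket="mybucket", Key="obj1", Body=b"data")

whether that is enough depends on how long the reshard takes relative to the retry/backoff budget, so it complements pre-sharding rather than replacing it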

On Fri, Jul 7, 2023 at 6:35 AM Eugen Block <eblock@xxxxxx> wrote:

Hi *,
last week I successfully upgraded a customer cluster from Nautilus to Pacific with no real issues; their main use is RGW. A couple of hours after most of the OSDs were upgraded (the RGWs were not yet), their application software reported an error: it couldn't write to a bucket. This error occurred again two days ago, and in the RGW logs I found the relevant messages showing that resharding was happening at that time. I'm aware that this is nothing unusual, but I can't find anything helpful on how to prevent it, except for deactivating dynamic resharding and then resharding manually during maintenance windows. We don't know yet whether any data is actually missing after bucket access recovered; that still needs to be investigated. Since Nautilus already had dynamic resharding enabled, I wonder if they were just lucky until now (for example, resharding happened while no data was being written to the buckets), or if resharding simply didn't occur until now. I have no access to the cluster, so I don't have any bucket stats available right now. I found this thread [1] about an approach to prevent blocked IO, but it's from 2019 and I don't know how far that got.

There are many users/operators on this list who use RGW more than I do: how do you deal with this? Are your clients better prepared for these events? Any comments are appreciated!

Thanks,
Eugen

[1]
https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/NG56XXAM5A4JONT4BGPQAZUTJAYMOSZ2/



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



