While a bucket is resharding, RGW will retry several times internally
to apply the write before returning an error to the client. While most
buckets can be resharded within seconds, very large buckets may hit
these timeouts. Any other cause of slow OSD ops could also have that
effect. It can be helpful to pre-shard very large buckets to avoid
these resharding delays.

Can you tell which error code was returned to the client there? It
should be a retryable error, and many HTTP clients have retry logic to
prevent these errors from reaching the application (a minimal
client-side retry sketch follows after the quoted message below).

On Fri, Jul 7, 2023 at 6:35 AM Eugen Block <eblock@xxxxxx> wrote:
>
> Hi *,
> last week I successfully upgraded a customer cluster from Nautilus to
> Pacific, no real issues, their main use is RGW. A couple of hours
> after most of the OSDs were upgraded (the RGWs were not yet), their
> application software reported an error: it couldn't write to a bucket.
> This error occurred again two days ago; in the RGW logs I found the
> relevant messages showing that resharding was happening at that time.
> I'm aware that this is nothing unusual, but I can't find anything
> helpful on how to prevent it, except for deactivating dynamic
> resharding and then resharding manually during maintenance windows.
> We don't know yet whether there's really data missing after the
> bucket access recovered; that still needs to be investigated. Since
> Nautilus already had dynamic resharding enabled, I wonder if they
> were just lucky until now, for example resharding happened while no
> data was being written to the buckets. Or if resharding just didn't
> happen until then; I have no access to the cluster, so I don't have
> any bucket stats available right now. I found this thread [1] about
> an approach to prevent blocked IO, but it's from 2019 and I don't
> know how far that got.
>
> There are many users/operators on this list who use RGW more than I
> do. How do you deal with this? Are your clients better prepared for
> these events? Any comments are appreciated!
>
> Thanks,
> Eugen
>
> [1]
> https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/NG56XXAM5A4JONT4BGPQAZUTJAYMOSZ2/
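
Below is a minimal sketch of client-side retries for writes that fail
while a bucket is mid-reshard, assuming a Python/boto3 S3 client; the
endpoint, credentials, and bucket/key names are placeholders, not taken
from this thread.

    # Sketch: let the SDK retry retryable errors (e.g. 503 SlowDown)
    # that RGW may return while a bucket reshard is in progress.
    import boto3
    from botocore.config import Config

    retry_config = Config(
        retries={
            "max_attempts": 10,  # total attempts, including the first
            "mode": "adaptive",  # standard retries plus client-side rate limiting
        }
    )

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",  # placeholder RGW endpoint
        aws_access_key_id="ACCESS_KEY",              # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
        config=retry_config,
    )

    # A write issued during a reshard is retried by the SDK with backoff
    # instead of surfacing the transient error to the application.
    s3.put_object(Bucket="mybucket", Key="object-key", Body=b"payload")

For very large buckets, pre-sharding ahead of time (for example with
radosgw-admin bucket reshard --bucket=<name> --num-shards=<n> during a
maintenance window) avoids hitting these retries in the first place.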