RGW dynamic resharding blocks write ops

Hi *,
last week I successfully upgraded a customer cluster from Nautilus to Pacific with no real issues; their main use case is RGW. A couple of hours after most of the OSDs were upgraded (the RGWs were not yet), their application software reported an error: it couldn't write to a bucket. The same error occurred again two days ago, and in the RGW logs I found messages showing that resharding was happening at that time.

I'm aware that this is nothing unusual, but I can't find anything helpful on how to prevent it except for deactivating dynamic resharding and resharding manually during maintenance windows. We don't know yet whether any data is actually missing after bucket access recovered; that still needs to be investigated.

Since Nautilus already had dynamic resharding enabled, I wonder if they were just lucky until now, e.g. resharding happened while no data was being written to the buckets, or if resharding simply never triggered before. I have no access to the cluster, so I don't have any bucket stats available right now. I found this thread [1] from 2019 about an approach to prevent blocked IO, but I don't know how far that got.
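For reference, the workaround I mean would look roughly like this; the bucket name and shard count are placeholders, and the exact config mechanism may differ depending on your setup (config database vs. ceph.conf):

```shell
# Disable dynamic resharding cluster-wide via the config database
# (alternatively set rgw_dynamic_resharding = false in ceph.conf and restart the RGWs)
ceph config set client.rgw rgw_dynamic_resharding false

# During a maintenance window: check object count and current shard count
radosgw-admin bucket stats --bucket=mybucket

# Reshard manually; writes to the bucket are blocked while this runs
radosgw-admin bucket reshard --bucket=mybucket --num-shards=64
```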

There are many users/operators on this list who use RGW more than I do: how do you deal with this? Are your clients better prepared for these events? Any comments are appreciated!

Thanks,
Eugen

[1] https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/NG56XXAM5A4JONT4BGPQAZUTJAYMOSZ2/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
