Re: RGW dynamic resharding blocks write ops

I do a manual reshard if needed, but I try to pre-shard in advance.

Before onboarding a user I ask them whether they need buckets with more
than a million objects (the default of 11 shards) or whether that is enough.
If they need more, I pre-shard to a prime-numbered shard count; if not,
they stay with the default 11.
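
A minimal sketch of what that pre-shard step looks like (the bucket name
is a placeholder and 23 is just an example prime; if I remember right,
the config option raises the default shard count for newly created
buckets):

  # reshard an existing bucket to a fixed, prime shard count
  radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=23

  # optionally raise the default shard count for new buckets
  ceph config set client.rgw rgw_override_bucket_index_max_shards 23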

Istvan Szabo
Staff Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

On 2023. Jul 7., at 17:49, Eugen Block <eblock@xxxxxx> wrote:


Okay, thanks for the comment. But does that mean that you never
reshard, or do you reshard manually? Do you experience performance
degradation? Maybe I should also add that they have their index pool
on HDDs (with RocksDB on SSD); I'm not sure how big the impact is
during resharding, though.
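
In case it helps others reading along, reshard activity can also be
checked directly, not just via the RGW logs (going from memory here,
the bucket name is a placeholder):

  radosgw-admin reshard list                      # queued / in-progress reshard jobs
  radosgw-admin reshard status --bucket=<bucket>  # per-bucket reshard state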

Zitat von "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:

I turned it off :)

Istvan Szabo
Staff Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

On 2023. Jul 7., at 17:35, Eugen Block <eblock@xxxxxx> wrote:


Hi *,
last week I successfully upgraded a customer cluster from Nautilus to
Pacific with no real issues; their main use case is RGW. A couple of
hours after most of the OSDs were upgraded (the RGWs were not yet),
their application software reported an error: it couldn't write to a
bucket. The error occurred again two days ago, and in the RGW logs I
found messages showing that resharding was happening at that time.
I'm aware that this is nothing unusual, but I can't find anything
helpful on how to prevent it, except for deactivating dynamic
resharding and then resharding manually during maintenance windows.
We don't know yet whether any data is actually missing after the
bucket access recovered; that still needs to be investigated. Since
Nautilus already had dynamic resharding enabled, I wonder if they
were just lucky until now, for example because resharding happened
while no data was being written to the buckets, or whether resharding
simply didn't happen until then. I have no access to the cluster, so
I don't have any bucket stats available right now. I found this
thread [1] about an approach to prevent blocked IO, but it's from
2019 and I don't know how far that got.
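
If I do end up going the manual route, my understanding is that it
would look roughly like this (untested sketch; bucket name and shard
count are placeholders):

  # disable dynamic resharding for all RGW daemons
  ceph config set client.rgw rgw_dynamic_resharding false

  # find buckets whose index shards exceed the recommended object count
  radosgw-admin bucket limit check

  # reshard the affected buckets during a maintenance window
  radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<new-count>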

There are many users/operators on this list who use RGW more heavily
than I do; how do you deal with this? Are your clients better
prepared for these events? Any comments are appreciated!

Thanks,
Eugen

[1]
https://lists.ceph.io/hyperkitty/list/dev@xxxxxxx/thread/NG56XXAM5A4JONT4BGPQAZUTJAYMOSZ2/





_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



