Re: Best practice regarding rgw scaling

On Thu, May 23, 2024 at 11:50 AM Szabo, Istvan (Agoda)
<Istvan.Szabo@xxxxxxxxx> wrote:
>
> Hi,
>
> Wondering what the best practice is to scale RGW: increase the thread numbers or spin up more gateways?
>
>
>   * Let's say I have 21000 connections on my haproxy
>   * I have 3 physical gateway servers, so each of them needs to serve 7000 connections
>
> This means that with a 512 thread pool size, each of them needs 13 gateways, 39 altogether in the cluster.
> or
> 3 gateways, each with 8192 RGW threads?
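
For reference, the arithmetic behind those figures, assuming one thread per
connection and the numbers given above:

    21000 connections / 3 servers  ~= 7000 connections per server
    7000 connections / 512 threads ~= 13.7, rounded to 13 RGW instances per server
    3 servers * 13 instances        = 39 RGW instances cluster-wide

That thread-per-connection assumption is what the calculation hinges on.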

With the beast frontend, rgw_max_concurrent_requests is the most
relevant config option here. While you might benefit from more than
512 threads at scale, you won't need a thread per connection.
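
If you do want to raise the ceiling on a small number of gateways rather than
add more, a minimal sketch of the relevant settings (the 2048/1024 values are
purely illustrative, not recommendations; tune against your own testing):

    # cap on requests the beast frontend will service concurrently
    ceph config set client.rgw rgw_max_concurrent_requests 2048
    # worker threads are shared across connections, so this does not need to
    # match the connection count; a daemon restart is typically needed for it
    # to take effect
    ceph config set client.rgw rgw_thread_pool_size 1024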

I'd also point out the relationship between concurrent requests and
memory usage: with default tunings, each PutObject
(rgw_put_obj_min_window_size) and GetObject (rgw_get_obj_window_size)
request may buffer up to 16MB of object data.
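
To put rough worst-case numbers on that, using the 16MB window and the
connection counts above (only requests actively streaming large objects hold
a full window, so real usage is normally well below this):

    1024 in-flight requests (the default cap, if memory serves) * 16 MiB  ~=  16 GiB per gateway
    7000 in-flight requests (one per proxied connection)        * 16 MiB  ~= 109 GiB per gateway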

>
> Thank you
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



