Hi All,
Currently running some tests, and I have run with up to 2048 threads without any problem.
As per the code, here is what it says (i.e., a compiled-in cap of 1024 * 64 = 65,536 worker threads):

#ifndef MAX_WORKER_THREADS
#define MAX_WORKER_THREADS (1024 * 64)
#endif
Regards JC
We’re running with 2000, FWIW.

On Oct 11, 2019, at 2:02 PM, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
Which defaults to rgw_thread_pool_size, so yeah, you can adjust that option.
To answer your actual question: we've run civetweb with 1024 threads with no problems related to the number of threads.
Paul
-- Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH, Freseniusstr. 31h, 81247 München
www.croit.io, Tel: +49 89 1896585 90
On Fri, Oct 11, 2019 at 10:50 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
You probably want to increase the number of civetweb threads; that's a parameter for civetweb in the rgw_frontends configuration (IIRC it's threads=xyz).
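For reference, a sketch of what that frontend line could look like in ceph.conf. The section name client.rgw.gateway1, the port, and the num_threads parameter name are assumptions to verify against your release's documentation:

```ini
# Hypothetical RGW daemon section; adjust the name for your deployment.
[client.rgw.gateway1]
# num_threads is the thread-count parameter seen in civetweb frontend
# examples; it defaults to rgw_thread_pool_size (assumption; verify).
rgw_frontends = civetweb port=7480 num_threads=1024
```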
Also, consider upgrading and using Beast; it's much better for rgw setups that handle lots of requests.
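For comparison, a hedged sketch of a Beast frontend configuration; the section name and port are assumptions, and the claim about how Beast sizes its pool should be checked against your release's docs:

```ini
# Hypothetical RGW daemon section; names and port are assumptions.
[client.rgw.gateway1]
rgw_frontends = beast port=7480
# Beast is commonly described as sizing its worker pool from
# rgw_thread_pool_size rather than a frontend parameter (assumption).
rgw_thread_pool_size = 1024
```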
Paul
On Fri, Oct 11, 2019 at 10:02 PM Benjamin.Zieglmeier <Benjamin.Zieglmeier@xxxxxxxxxx> wrote:
Hello all,
Looking for guidance on the recommended highest setting (or input on experiences from users who have a high setting) for rgw_thread_pool_size. We are running multiple Luminous 12.2.11 clusters with usually 3-4 RGW daemons in front of them. We set our rgw_thread_pool_size at 512 out of the gate, and run civetweb. We had occasional service outages in one of our clusters this week and determined the rgws were running out of available threads to handle requests. We doubled our thread pool size to 1024 on each rgw and everything has been ok so far.
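As a rough sketch, one way to watch for this kind of thread exhaustion is to count the OS threads (LWPs) of the radosgw daemon; the process name radosgw is an assumption for a default deployment:

```shell
# Count kernel threads of the radosgw daemon (process name is an
# assumption); fall back to the current shell just for illustration.
pid=$(pgrep -o radosgw 2>/dev/null || echo $$)
ps -o nlwp= -p "$pid"
```

Comparing that count against rgw_thread_pool_size over time would show how close the daemon runs to its limit.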
What, if any, would be the high-end limit to set for rgw_thread_pool_size? I’ve been unable to find anything in the documentation or on the user list that describes anything higher than the default of 100 threads.
Thanks,
Ben
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx