Maybe I am being foolish here, but I am wondering: what is the major benefit of running multiple RGWs instead of a single one on a bare-metal machine? Is it because a single RGW has some inherent limitation on using multiple threads?

thanks,
Samuel

huxiaoyu@xxxxxxxxxxxx


From: Szabo, Istvan (Agoda)
Date: 2021-09-13 09:52
To: huxiaoyu@xxxxxxxxxxxx; Eugen Block
CC: ceph-users
Subject: RE: RE: Re: How many concurrent users can be supported by a single Rados gateway

Yeah, 5 instances on different ports on each bare-metal machine.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------


From: huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx>
Sent: Monday, September 13, 2021 2:24 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>; Eugen Block <eblock@xxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re: RE: Re: How many concurrent users can be supported by a single Rados gateway

Dear Istvan,

Thanks a lot for sharing. I have a question: how do you run 15 RGWs on 3 nodes? Using VMs, containers, or directly on the physical machines? I am not sure whether it is good (or even possible) to run multiple RGWs directly on a physical machine...

best regards,
Samuel

huxiaoyu@xxxxxxxxxxxx


From: Szabo, Istvan (Agoda)
Date: 2021-09-13 04:45
To: huxiaoyu@xxxxxxxxxxxx; Eugen Block
CC: ceph-users
Subject: RE: Re: How many concurrent users can be supported by a single Rados gateway

Good topic, I'd be interested as well. One of the Red Hat documents says 1 gateway per 50 OSDs, but I don't think that formula is relevant. A couple of times users have done something ill-advised and completely DDoSed the whole cluster. What I did was add 4 additional RGWs on each of the mon/mgr nodes where the gateway was already running, to sustain the very high load, so currently I'm running about 15 RGWs behind a haproxy load balancer on 3 nodes.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------


-----Original Message-----
From: huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx>
Sent: Saturday, September 11, 2021 1:51 PM
To: Eugen Block <eblock@xxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re: How many concurrent users can be supported by a single Rados gateway

Thanks for the suggestions. My viewpoint may be wrong, but I think stability is paramount for us, and an older version such as Luminous may be much more battle-tested than recent ones. Unless there are reports of instability or bugs, I would still trust the older versions. That is just my own preference as to which version earns my trust.

thanks a lot,
Samuel

huxiaoyu@xxxxxxxxxxxx


From: Eugen Block
Date: 2021-09-10 17:21
To: huxiaoyu
CC: ceph-users
Subject: Re: How many concurrent users can be supported by a single Rados gateway

The first suggestion is not to use Luminous, since it is already EOL. We noticed major improvements in performance when upgrading from L to Nautilus, and N will also be EOL soon. Since there are some reports of performance degradation when upgrading to Pacific, I would recommend using Octopus.
Zitat von huxiaoyu@xxxxxxxxxxxx:

> Dear Cephers,
>
> I am planning a Ceph cluster (Luminous 12.2.13) for hosting online
> courses for a university. The data will mostly be video media, and
> thus a 4+2 EC-coded object store together with the CivetWeb RADOS
> gateway will be used.
>
> We plan to use 4 physical machines solely as RADOS gateways, each
> with 2x Intel 6226R CPUs and 256 GB of memory, to serve 8000 students
> concurrently, each of whom may generate 2x 2 Mb/s video streams.
>
> Are these 4 RADOS gateway machines a reasonable configuration for
> 8000 users, or overkill, or insufficient?
>
> Suggestions and comments are highly appreciated,
>
> best regards,
>
> Samuel
>
> huxiaoyu@xxxxxxxxxxxx
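
As a back-of-the-envelope check on the numbers in the original post: 8000 students x 2 streams x 2 Mb/s = 32 Gb/s of aggregate client traffic, or 8 Gb/s per gateway when spread evenly over the 4 machines. That is close to saturating a 10 GbE link per gateway before any protocol overhead, so the network interfaces are at least as much of a sizing constraint as CPU and memory.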
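
On the 4+2 EC pool itself: such a pool is typically created from an erasure-code profile along these lines (the profile name and PG counts here are illustrative, not taken from the thread):

    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
    # default.rgw.buckets.data is the standard data pool name of the
    # 'default' zone in a stock single-site RGW setup
    ceph osd pool create default.rgw.buckets.data 512 512 erasure ec-4-2

Note that k=4, m=2 with a host failure domain requires at least six OSD hosts.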
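
On Samuel's question of whether multiple RGWs can run directly on a physical machine: with a non-containerized deployment this is routine; each instance is simply a separate radosgw process with its own client name and frontend port in ceph.conf. A minimal sketch for two of Istvan's five instances on one host (host name, instance names, and ports are illustrative):

    [client.rgw.node1.a]
        host = node1
        rgw frontends = beast port=8001

    [client.rgw.node1.b]
        host = node1
        rgw frontends = beast port=8002

Each instance then runs as its own systemd unit, e.g. ceph-radosgw@rgw.node1.a. On Luminous the frontend would be civetweb rather than beast, which became the default in Nautilus.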
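
The haproxy tier in front of a setup like Istvan's is then ordinary HTTP load balancing across all instance ports; a sketch with made-up addresses:

    frontend rgw-in
        bind *:80
        mode http
        default_backend rgw-out

    backend rgw-out
        mode http
        balance leastconn
        server node1-a 192.168.0.1:8001 check
        server node1-b 192.168.0.1:8002 check
        server node2-a 192.168.0.2:8001 check
        # ... one 'server' line per radosgw instance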