I don't use cephadm, I'm using the non-dockerised deployment, but how do you create another one on the same host with this config? This is my RGW section:

[client.rgw.xyz-cephmon-2s01.rgw0]
host = xyz-cephmon-2s01
keyring = /var/lib/ceph/radosgw/ceph-rgw.xyz-cephmon-2s01.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-xyz-cephmon-2s01.rgw0.log
rgw frontends = beast endpoint=123.456.199.1:8080
rgw thread pool size = 512
rgw_zone=FRT

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: by morphin <morphinwithyou@xxxxxxxxx>
Sent: Thursday, April 22, 2021 6:30 PM
To: ivan@xxxxxxxxxxxxx
Cc: Sebastian Wagner <sewagner@xxxxxxxxxx>; Ceph Users <ceph-users@xxxxxxx>
Subject: [Suspicious newsletter] Re: cephadm: how to create more than 1 rgw per host

Hello. It's easy. In ceph.conf, copy the rgw section and change three things:

1. the instance name
2. the log file path
3. the client port

After that, feel free to start the rgw service with systemctl. Check the service status and tail the rgw log file. Try a read or write and check the logs. If everything works as expected, you are ready to add the new service to your load balancer, if you have one.

On Thu, 22 Apr 2021 at 14:00, ivan@xxxxxxxxxxxxx <ivan@xxxxxxxxxxxxx> wrote:

> Does anyone know how to create more than 1 rgw per host? Surely it's
> not a rare configuration.
>
> On 2021/04/19 17:09, ivan@xxxxxxxxxxxxx wrote:
> >
> > Hi Sebastian,
> >
> > Thank you. Is there a way to create more than 1 rgw per host until
> > this new feature is released?
> >
> > On 2021/04/19 11:39, Sebastian Wagner wrote:
> >> Hi Ivan,
> >>
> >> this is a feature that is not yet released in Pacific. It seems the
> >> documentation is a bit ahead of time right now.
> >>
> >> Sebastian
> >>
> >> On Fri, Apr 16, 2021 at 10:58 PM ivan@xxxxxxxxxxxxx
> >> <ivan@xxxxxxxxxxxxx> wrote:
> >>
> >>     Hello,
> >>
> >>     According to the documentation, there's a count-per-host key to
> >>     'ceph orch', but it does not work for me:
> >>
> >>     :~# ceph orch apply rgw z1 sa-1 --placement='label:rgw count-per-host:2'
> >>        --port=8000 --dry-run
> >>     Error EINVAL: Host and label are mutually exclusive
> >>
> >>     Why does it say anything about Host if I don't specify any hosts,
> >>     just labels?
> >>
> >>     ~# ceph orch host ls
> >>     HOST  ADDR  LABELS       STATUS
> >>     s101  s101  mon rgw
> >>     s102  s102  mgr mon rgw
> >>     s103  s103  mon rgw
> >>     s104  s104  mgr mon rgw
> >>     s105  s105  mgr mon rgw
> >>     s106  s106  mon rgw
> >>     s107  s107  mon rgw
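Applying morphin's three changes to the section Istvan posted, a second instance on the same host could look roughly like the following. This is only a sketch: the instance name rgw1, the port 8081, and the keyring/log paths derived from that name are assumptions, not values taken from the thread.

[client.rgw.xyz-cephmon-2s01.rgw1]
host = xyz-cephmon-2s01
# assumed keyring path for the new instance; a matching cephx key has to exist (see the commands below)
keyring = /var/lib/ceph/radosgw/ceph-rgw.xyz-cephmon-2s01.rgw1/keyring
# changed: separate log file so the two instances do not write to the same file
log file = /var/log/ceph/ceph-rgw-xyz-cephmon-2s01.rgw1.log
# changed: different port so the second beast frontend does not collide with rgw0 on 8080
rgw frontends = beast endpoint=123.456.199.1:8081
rgw thread pool size = 512
rgw_zone=FRT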
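On a package-based (non-cephadm) deployment the radosgw package ships a ceph-radosgw@ systemd template unit, so starting and verifying the assumed rgw1 instance would look something like the commands below. The cephx caps and the smoke-test URL are assumptions; mirror whatever the existing rgw0 instance was created with on your cluster.

# create the data directory and a cephx key for the new instance
# (the caps shown are typical for rgw, but copy the ones rgw0 uses)
mkdir -p /var/lib/ceph/radosgw/ceph-rgw.xyz-cephmon-2s01.rgw1
ceph auth get-or-create client.rgw.xyz-cephmon-2s01.rgw1 \
    mon 'allow rw' osd 'allow rwx' \
    -o /var/lib/ceph/radosgw/ceph-rgw.xyz-cephmon-2s01.rgw1/keyring

# start the second instance, check its status and tail its log, as suggested above
systemctl enable --now ceph-radosgw@rgw.xyz-cephmon-2s01.rgw1
systemctl status ceph-radosgw@rgw.xyz-cephmon-2s01.rgw1
tail -f /var/log/ceph/ceph-rgw-xyz-cephmon-2s01.rgw1.log

# quick smoke test against the new endpoint (port 8081 assumed above)
curl -i http://123.456.199.1:8081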