Re: cephadm: how to create more than 1 rgw per host

Hello.

It's easy. In ceph.conf, copy the rgw section and change 3 things:
1- the instance name
2- the log file path
3- the client port
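
For example, a second instance on the same host could look roughly like
this in ceph.conf (only a sketch; the instance names, ports and log paths
are placeholders, adjust them to your cluster):

    # existing instance
    [client.rgw.s101.a]
    rgw frontends = beast port=8000
    log file = /var/log/ceph/client.rgw.s101.a.log

    # copy of the section above, with a new name, log path and port
    [client.rgw.s101.b]
    rgw frontends = beast port=8001
    log file = /var/log/ceph/client.rgw.s101.b.log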


After that, feel free to start the rgw service with systemctl. Check the
service status and tail the rgw log file. Try to read or write and check
the logs. If everything works as expected, you are ready to add the new
service to the loadbalancer, if you have one.
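
In practice that can look like this (a sketch, assuming the second
instance from the example above is named client.rgw.s101.b; unit and path
names may differ on your system):

    # start the new daemon and watch it
    systemctl start ceph-radosgw@rgw.s101.b
    systemctl status ceph-radosgw@rgw.s101.b
    tail -f /var/log/ceph/client.rgw.s101.b.log

    # quick smoke test: an anonymous request on the new port should get
    # an S3 XML response back from rgw
    curl http://localhost:8001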



On Thu, 22 Apr 2021 at 14:00, ivan@xxxxxxxxxxxxx <ivan@xxxxxxxxxxxxx>
wrote:

> Does anyone know how to create more than 1 rgw per host? Surely it's not
> a rare configuration.
>
> On 2021/04/19 17:09, ivan@xxxxxxxxxxxxx wrote:
> >
> > Hi Sebastian,
> >
> > Thank you. Is there a way to create more than 1 rgw per host until
> > this new feature is released?
> >
> > On 2021/04/19 11:39, Sebastian Wagner wrote:
> >> Hi Ivan,
> >>
> >> this is a feature that is not yet released in Pacific. It seems the
> >> documentation is a bit ahead of time right now.
> >>
> >> Sebastian
> >>
> >> On Fri, Apr 16, 2021 at 10:58 PM ivan@xxxxxxxxxxxxx
> >> <ivan@xxxxxxxxxxxxx> wrote:
> >>
> >>     Hello,
> >>
> >>     According to the documentation, there's a count-per-host key to
> >>     'ceph orch', but it does not work for me:
> >>
> >>     :~# ceph orch apply rgw z1 sa-1 --placement='label:rgw count-per-host:2' --port=8000 --dry-run
> >>     Error EINVAL: Host and label are mutually exclusive
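
For what it's worth: once count-per-host support is actually released,
the same placement should be expressible as a service spec applied with
"ceph orch apply -i rgw.yaml". The following is only a sketch based on the
cephadm documentation, untested on this release, and the values are
placeholders taken from the command above:

    service_type: rgw
    service_id: z1.sa-1          # placeholder id (realm.zone)
    placement:
      label: rgw                 # same label as in --placement
      count_per_host: 2          # two rgw daemons per labelled host
    spec:
      rgw_realm: z1
      rgw_zone: sa-1
      rgw_frontend_port: 8000    # base port for the daemons on each host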
> >>
> >>     Why does it mention Host if I don't specify any hosts, just
> >>     labels?
> >>
> >>     ~# ceph orch host ls
> >>     HOST  ADDR  LABELS       STATUS
> >>     s101  s101  mon rgw
> >>     s102  s102  mgr mon rgw
> >>     s103  s103  mon rgw
> >>     s104  s104  mgr mon rgw
> >>     s105  s105  mgr mon rgw
> >>     s106  s106  mon rgw
> >>     s107  s107  mon rgw
> >>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



