Yes, I'm trying to add an RGW container on a second port on the same
server. For example, I run:
ceph orch apply rgw test test --placement="ceph-monitor1:[10.50.47.3:9999]"
and this results in:
ceph orch ls
NAME           RUNNING  REFRESHED  AGE  PLACEMENT                        IMAGE NAME               IMAGE ID
rgw.test.test  0/1      2s ago     5s   ceph-monitor1:[10.50.47.3:9999]  docker.io/ceph/ceph:v15  <unknown>
The image and container ID being unknown is making me scratch my head.
A look in the log files shows this:
2021-11-15 10:50:12,253 INFO Deploy daemon rgw.test.test.ceph-monitor1.rtoiwh ...
2021-11-15 10:50:12,254 DEBUG Running command: /usr/bin/docker run --rm --ipc=host --net=host --entrypoint stat -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=ceph-monitor1 docker.io/ceph/ceph:v15 -c %u %g /var/lib/ceph
2021-11-15 10:50:12,452 DEBUG stat: stdout 167 167
2021-11-15 10:50:12,525 DEBUG Running command: install -d -m0770 -o 167 -g 167 /var/run/ceph/04c5d4a4-8815-45fb-b97f-027252d1aea5
2021-11-15 10:50:12,534 DEBUG Running command: systemctl daemon-reload
2021-11-15 10:50:12,869 DEBUG Running command: systemctl stop ceph-04c5d4a4-8815-45fb-b97f-027252d1aea5@xxxxxxxxxxxxxxxxxx-monitor1.rtoiwh
2021-11-15 10:50:12,879 DEBUG Running command: systemctl reset-failed ceph-04c5d4a4-8815-45fb-b97f-027252d1aea5@xxxxxxxxxxxxxxxxxx-monitor1.rtoiwh
2021-11-15 10:50:12,884 DEBUG systemctl: stderr Failed to reset failed state of unit ceph-04c5d4a4-8815-45fb-b97f-027252d1aea5@xxxxxxxxxxxxxxxxxx-monitor1.rtoiwh.service: Unit ceph-04c5d4a4-8815-45fb-b97f-027252d1aea5@xxxxxxxxxxxxxxxxxx-monitor1.rtoiwh.service not loaded
journalctl -xe shows that the service entered a failed state, without
any really useful information:
Nov 15 10:50:24 ceph-monitor1 systemd[1]: ceph-04c5d4a4-8815-45fb-b97f-027252d1aea5@xxxxxxxxxxxxxxxxxx-monitor1.rtoiwh.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
What I understand from this is that I'm running the right command;
it's just cephadm itself that's breaking, somehow.
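
In the meantime, the only way I've found to get at the daemon's actual
output is to query its journal on the host itself. A rough sketch,
using the daemon name from the deploy log above (the second command
assumes cephadm's usual ceph-<fsid>@<daemon-name> unit naming):

# container logs for that one daemon, via cephadm on the host
cephadm logs --name rgw.test.test.ceph-monitor1.rtoiwh

# or ask systemd about the unit directly
systemctl status 'ceph-04c5d4a4-8815-45fb-b97f-027252d1aea5@rgw.test.test.ceph-monitor1.rtoiwh.service'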
On 11/15/21 5:59 AM, Eugen Block wrote:
Hi,
it's not entirely clear what your setup looks like: are you trying to
set up multiple RGW containers on the same host(s) to serve multiple
realms, or do you already have multiple RGWs for that?
You can add a second realm with a spec file or via the CLI (which you
already did). If you want to create multiple RGW containers per host,
you need to specify a different port for each RGW; see the docs [1]
for some examples.
This worked just fine in my Octopus lab, except for a little mistake
in the "port" spec. Apparently this:

spec:
  port: 8000

doesn't work:
host1:~ # ceph orch apply -i rgw2.yaml
Error EINVAL: ServiceSpec: __init__() got an unexpected keyword argument 'port'
But this does:

spec:
  rgw_frontend_port: 8000
Now I have two RGW containers on each host, serving two different realms.
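
For reference, the complete spec file looks roughly like this in my
lab (the service id, hosts and port are placeholders; if I read the
Octopus behaviour right, the realm and zone are derived from the
"realm.zone" service id):

service_type: rgw
service_id: realm2.zone2
placement:
  hosts:
    - host1
    - host2
spec:
  rgw_frontend_port: 8000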
[1] https://docs.ceph.com/en/latest/cephadm/services/rgw/
Quoting J-P Methot <jp.methot@xxxxxxxxxxxxxxxxx>:
Hi,
I'm testing out adding a second RGW realm to my single Ceph cluster.
This is not very well documented, though, since realms were obviously
designed for multi-site deployments.
Now, what I can't seem to figure out is whether I need to deploy a
container with cephadm to act as a frontend for this second realm
and, if so, how. I've set a frontend port and address when I created
the second realm, but my attempts at creating an RGW container for
that realm didn't work at all, with the container just not booting up.
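
For context, I created the second realm roughly along these lines,
loosely following the multisite docs (the realm/zonegroup/zone names
here are placeholders; the endpoint is the address and port I want
the frontend on):

radosgw-admin realm create --rgw-realm=realm2
radosgw-admin zonegroup create --rgw-zonegroup=zg2 --rgw-realm=realm2 --master --endpoints=http://10.50.47.3:9999
radosgw-admin zone create --rgw-zonegroup=zg2 --rgw-zone=zone2 --master --endpoints=http://10.50.47.3:9999
radosgw-admin period update --commit --rgw-realm=realm2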
--
Jean-Philippe Méthot
Senior Openstack system administrator
Administrateur système Openstack sénior
PlanetHoster inc.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx