Great, thanks for sharing your solution. It would be great if you could open a
tracker describing the issue so it can be fixed later in the cephadm code.
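Until that lands, one possible stopgap is to keep the IPv6 settings in a
separate sysctl.d drop-in that cephadm does not manage, so they survive the
regeneration of 90-ceph-FSID-keepalived.conf. A minimal sketch, assuming bond1
is the interface carrying the VIP and using a hypothetical file name:

# /etc/sysctl.d/99-ceph-ipv6-vip.conf  (hypothetical name, not created by cephadm)
# let keepalived/haproxy bind the IPv6 VIP before it is assigned to this host
net.ipv6.ip_nonlocal_bind = 1
# forward traffic for the VIP arriving on the bonded interface
net.ipv6.conf.bond1.forwarding = 1

Apply it once with "sysctl -p /etc/sysctl.d/99-ceph-ipv6-vip.conf"; systemd-sysctl
re-applies the files in /etc/sysctl.d at boot, so the settings should persist even
when cephadm rewrites its own file. The accept_source_route and accept_redirects
lines from your list can be added there the same way if they turn out to be needed;
ip_nonlocal_bind is presumably the one that matters for the "Cannot assign
requested address" bind errors on the nodes that do not currently hold the VIP.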
Best,
Redo

On Tue, Jul 19, 2022 at 9:28 AM Robert Reihs <robert.reihs@xxxxxxxxx> wrote:

> Hi,
> I think I found the problem. We are using IPv6 only, and the config that
> cephadm creates only adds the IPv4 settings:
>
> /etc/sysctl.d/90-ceph-FSID-keepalived.conf
> # created by cephadm
>
> # IP forwarding and non-local bind
> net.ipv4.ip_forward = 1
> net.ipv4.ip_nonlocal_bind = 1
>
> I added:
>
> net.ipv6.conf.bond1.forwarding = 1
> net.ipv6.conf.bond1.accept_source_route = 1
> net.ipv6.conf.bond1.accept_redirects = 1
> net.ipv6.ip_nonlocal_bind = 1
>
> and reloaded the file with:
> sysctl -f /etc/sysctl.d/90-ceph-FSID-keepalived.conf
>
> After restarting the service, everything starts up. The file gets
> overwritten again, though, so the added config does not persist.
>
> Best
> Robert Reihs
>
> On Mon, Jul 18, 2022 at 3:33 PM Robert Reihs <robert.reihs@xxxxxxxxx>
> wrote:
>
> > Hi everyone,
> > I have a problem with the haproxy settings for the rgw service. I
> > specified the service in the service specification:
> >
> > ---
> > service_type: rgw
> > service_id: rgw
> > placement:
> >   count: 3
> >   label: "rgw"
> > ---
> > service_type: ingress
> > service_id: rgw.rgw
> > placement:
> >   count: 3
> >   label: "ingress"
> > spec:
> >   backend_service: rgw.rgw
> >   virtual_ip: ffff:ffff:ffff:404::dd:ff:10/64
> >   virtual_interface_networks: ffff:ffff:ffff:404/64
> >   frontend_port: 8998
> >   monitor_port: 8999
> >
> > The keepalived services are all started, but only one of the haproxy
> > services starts; the other two are in an error state:
> >
> > systemd[1]: Starting Ceph haproxy.rgw.rgw.fsn1-ceph-01.ulnhyo for
> > 40ddf3a6-36f1-42d2-9bf7-2fd50045e5dc...
> > podman[3616202]: 2022-07-18 13:03:25.738014313 +0000 UTC m=+0.052607969
> > container create
> > 25f90c4e26ebf6fc44efe12eae2c6b9d54811bfde744a78f756469e32c3f461f (image=
> > docker.io/library/haproxy:2.3, name=ceph-40ddf3>
> > podman[3616202]: 2022-07-18 13:03:25.787788203 +0000 UTC m=+0.102381880
> > container init
> > 25f90c4e26ebf6fc44efe12eae2c6b9d54811bfde744a78f756469e32c3f461f (image=
> > docker.io/library/haproxy:2.3, name=ceph-40ddf3a6>
> > podman[3616202]: 2022-07-18 13:03:25.790577637 +0000 UTC m=+0.105171323
> > container start
> > 25f90c4e26ebf6fc44efe12eae2c6b9d54811bfde744a78f756469e32c3f461f (image=
> > docker.io/library/haproxy:2.3, name=ceph-40ddf3a>
> > bash[3616202]:
> > 25f90c4e26ebf6fc44efe12eae2c6b9d54811bfde744a78f756469e32c3f461f
> > conmon[3616235]: [NOTICE] 198/130325 (2) : haproxy version is
> > 2.3.20-2c8082e
> > conmon[3616235]: [NOTICE] 198/130325 (2) : path to executable is
> > /usr/local/sbin/haproxy
> > conmon[3616235]: [ALERT] 198/130325 (2) : Starting frontend stats: cannot
> > bind socket (Cannot assign requested address)
> > [ffff:ffff:ffff:404::dd:ff:10:8999]
> > conmon[3616235]: [ALERT] 198/130325 (2) : Starting frontend frontend:
> > cannot bind socket (Cannot assign requested address)
> > [ffff:ffff:ffff:404::dd:ff:10:8998]
> > conmon[3616235]: [ALERT] 198/130325 (2) : [haproxy.main()] Some protocols
> > failed to start their listeners! Exiting.
> >
> > I can access the IP in the browser and get the XML S3 response.
> > This is ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1)
> > quincy (stable), installed with cephadm.
> >
> > Any idea where the problem could be?
> > Thanks
> > Robert Reihs
>
> --
> Robert Reihs
> Jakobsweg 22
> 8046 Stattegg
> AUSTRIA
>
> mobile: +43 (664) 51 035 90
> robert.reihs@xxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx