Multisite RGW with two realms + ingress (haproxy/keepalived) using cephadm

Dear list,

I have a problem with a multisite RGW setup and ingress with ceph
orch/cephadm. I have two realms, and each is to be served by its own RGW
daemon on its own port on each host.

FWIW, I'm running this on Ceph v16.2.6 on CentOS 7.9 with kernel
3.10.0-1160.42.2.el7.x86_64.

I have set up two radosgw realms, "default" on port 8000 and "ext" on port
8100:

```
# ceph orch apply rgw default default default-default-primary 8000 --placement="count-per-host:1;label:rgw"
# ceph orch apply rgw ext ext ext-default-primary 8100 --placement="count-per-host:1;label:rgw"
```
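
If it helps, my understanding is that these commands correspond to service
specs roughly like the following (a sketch for "default" only; I'm assuming
the positional arguments map to `rgw_realm`, `rgw_zone`, and the frontend
port, and I'm not sure what the second "default" maps to, possibly the
zonegroup):

```
service_type: rgw
service_id: default
placement:
  count_per_host: 1
  label: rgw
spec:
  rgw_realm: default
  rgw_zone: default-default-primary
  rgw_frontend_port: 8000
```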

This is reflected in `ceph orch ls`:

```
# ceph orch ls
NAME         PORTS   RUNNING  REFRESHED  AGE  PLACEMENT
...
rgw.default  ?:8000      6/6  9m ago     4h   count-per-host:1;label:rgw
rgw.ext      ?:8100      6/6  9m ago     4d   count-per-host:1;label:rgw
```

Now I want to set up two separate ingress services on two separate virtual
IPs (172.16.62.26 for "default" and 172.16.62.27 for "ext").

I'm using the following YAML files to specify the ingress services.

```
# cat ingress.rgw.default.yml
service_type: ingress
service_id: rgw.default
placement:
  count: 6
spec:
  backend_service: rgw.default
  virtual_ip: 172.16.62.26/19
  frontend_port: 443
  monitor_port: 1967
  ssl_cert: |
    # ...

# cat ingress.rgw.ext.yaml
service_type: ingress
service_id: rgw.ext
placement:
  count: 6
spec:
  backend_service: rgw.ext
  virtual_ip: 172.16.62.27/19
  frontend_port: 443
  monitor_port: 1968
  ssl_cert: |
    # ...
```
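
I apply these with `ceph orch apply -i`:

```
# ceph orch apply -i ingress.rgw.default.yml
# ceph orch apply -i ingress.rgw.ext.yaml
```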

I now observe that when connecting to `172.16.62.26`, I get directed to a
random realm. Also, both virtual IPs are assigned to bond0 of osd-6 for some
reason.
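
I can see both effects on the nodes with standard tools, e.g.:

```
# ss -tlnp | grep haproxy   # haproxy listens on *:443 and the monitor port, on all interfaces
# ip addr show bond0        # on osd-6, both 172.16.62.26 and 172.16.62.27 are assigned
```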

Below is a reproduction of the haproxy.cfg and keepalived.conf files that
are generated on my first node. I do not have much experience with either
keepalived or haproxy, but I do not see where the virtual IP managed by
keepalived is connected to haproxy; naively, I'd expect this to be done in
haproxy.cfg. Instead, it looks like the haproxy frontends bind to their
configured ports on all interfaces (the main frontend on port 443 as well as
the stats/monitor frontend), which could explain the problem I'm observing:

```
frontend stats
    mode http
    bind *:1968  # <--
```
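
What I would have expected instead is each frontend binding only its
service's virtual IP, e.g. something like this (hypothetical; not what
cephadm generates for me):

```
frontend frontend
    bind 172.16.62.26:443 ssl crt /var/lib/haproxy/haproxy.pem
    default_backend backend
```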

Is what I want to do supported at all? From the docs I inferred that it
should be possible, but they don't spell it out explicitly. I also notice
that both generated keepalived.conf files use the same vrrp_instance name
(VI_0) and the same virtual_router_id (51); could the two VRRP instances be
conflicting, and could that explain why both VIPs end up on the same host?
Am I missing anything? I'd love to figure this out.

Best wishes,
Manuel

Below is the generated configuration.

```
# cat /var/lib/ceph/55633ec3-6c0c-4a02-990c-0f87e0f7a01f/haproxy.rgw.default.osd-1.avmzog/haproxy/haproxy.cfg
# This file is generated by cephadm.
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/lib/haproxy/haproxy.pid
    maxconn     8000
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout queue           20s
    timeout connect         5s
    timeout http-request    1s
    timeout http-keep-alive 5s
    timeout client          1s
    timeout server          1s
    timeout check           5s
    maxconn                 8000

frontend stats
    mode http
    bind *:1967
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:ivlgujuagrksajemsqyg
    http-request use-service prometheus-exporter if { path /metrics }
    monitor-uri /health

frontend frontend
    bind *:443 ssl crt /var/lib/haproxy/haproxy.pem
    default_backend backend

backend backend
    option forwardfor
    balance static-rr
    option httpchk HEAD / HTTP/1.0
    server rgw.default.osd-1.uxyfem 172.16.62.10:8000 check weight 100
    server rgw.default.osd-2.vqoyen 172.16.62.11:8000 check weight 100
    server rgw.default.osd-3.laxzlc 172.16.62.12:8000 check weight 100
    server rgw.default.osd-4.dayysd 172.16.62.13:8000 check weight 100
    server rgw.default.osd-5.xbsswv 172.16.62.30:8000 check weight 100
    server rgw.default.osd-6.rscshn 172.16.62.31:8000 check weight 100

# cat /var/lib/ceph/55633ec3-6c0c-4a02-990c-0f87e0f7a01f/keepalived.rgw.default.osd-1.plfwau/keepalived.conf
# This file is generated by cephadm.
vrrp_script check_backend {
    script "/usr/bin/curl http://localhost:1967/health";
    weight -20
    interval 2
    rise 2
    fall 2
}

vrrp_instance VI_0 {
  state MASTER
  priority 100
  interface bond0
  virtual_router_id 51
  advert_int 1
  authentication {
      auth_type PASS
      auth_pass qghwhcnanqsltihgtpsm
  }
  unicast_src_ip 172.16.62.10
  unicast_peer {
    172.16.62.11
    172.16.62.12
    172.16.62.13
    172.16.62.30
    172.16.62.31
  }
  virtual_ipaddress {
    172.16.62.26/19 dev bond0
  }
  track_script {
      check_backend
  }
}

# cat /var/lib/ceph/55633ec3-6c0c-4a02-990c-0f87e0f7a01f/haproxy.rgw.ext.osd-1.qygmbq/haproxy/haproxy.cfg
# This file is generated by cephadm.
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/lib/haproxy/haproxy.pid
    maxconn     8000
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout queue           20s
    timeout connect         5s
    timeout http-request    1s
    timeout http-keep-alive 5s
    timeout client          1s
    timeout server          1s
    timeout check           5s
    maxconn                 8000

frontend stats
    mode http
    bind *:1968
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:dqbcyhkngamkwnkbuuzr
    http-request use-service prometheus-exporter if { path /metrics }
    monitor-uri /health

frontend frontend
    bind *:443 ssl crt /var/lib/haproxy/haproxy.pem
    default_backend backend

backend backend
    option forwardfor
    balance static-rr
    option httpchk HEAD / HTTP/1.0
    server rgw.ext.osd-1.zkqcbe 172.16.62.10:8100 check weight 100
    server rgw.ext.osd-2.doovpb 172.16.62.11:8100 check weight 100
    server rgw.ext.osd-3.faurwu 172.16.62.12:8100 check weight 100
    server rgw.ext.osd-4.svzpfo 172.16.62.13:8100 check weight 100
    server rgw.ext.osd-5.kbzjpx 172.16.62.30:8100 check weight 100
    server rgw.ext.osd-6.fvpnju 172.16.62.31:8100 check weight 100

# cat /var/lib/ceph/55633ec3-6c0c-4a02-990c-0f87e0f7a01f/keepalived.rgw.ext.osd-1.pjzwqk/keepalived.conf
# This file is generated by cephadm.
vrrp_script check_backend {
    script "/usr/bin/curl http://localhost:1968/health";
    weight -20
    interval 2
    rise 2
    fall 2
}

vrrp_instance VI_0 {
  state MASTER
  priority 100
  interface bond0
  virtual_router_id 51
  advert_int 1
  authentication {
      auth_type PASS
      auth_pass xxgpymkcoeqjzkqdvcwk
  }
  unicast_src_ip 172.16.62.10
  unicast_peer {
    172.16.62.11
    172.16.62.12
    172.16.62.13
    172.16.62.30
    172.16.62.31
  }
  virtual_ipaddress {
    172.16.62.27/19 dev bond0
  }
  track_script {
      check_backend
  }
}
```