Keepalived configuration with cephadm

Hi all,

We are running a test Ceph cluster with cephadm, currently on the latest Pacific (16.2.13).
We use cephadm to deploy keepalived 2.1.5 and HAProxy 2.3.
We have 3 VIPs, one for each HAProxy instance.

However, we do not use the same network for managing the cluster as for public traffic.
We have a management network, used to connect to the machines and for cephadm to perform deployments, and a prod network, where connections to HAProxy are made.

Our spec file looks like:
---
service_type: ingress
service_id: rgw.rgw
placement:
  label: rgws
spec:
  backend_service: rgw.rgw
  virtual_ips_list:
  - 10.X.X.10/24
  - 10.X.X.2/24
  - 10.X.X.3/24
  frontend_port: 443
  monitor_port: 1967

Our issue is that cephadm populates `unicast_src_ip` and `unicast_peer` with the IPs from the mgmt network rather than the ones from the prod network.
A quick look into the code suggests it is designed that way.

As a result, the keepalived instances cannot talk to each other, because VRRP traffic is only allowed on our prod network.
I quickly tested removing `unicast_src_ip` and `unicast_peer`, and the keepalived instances were then able to talk to each other.
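For context, the generated keepalived.conf looks roughly like the following (interface names and mgmt IPs here are illustrative, not our actual values). The two unicast directives are the ones carrying mgmt-network addresses:

```
vrrp_instance VI_0 {
  state MASTER
  priority 100
  interface eth1                  # interface matching the VIP subnet (prod)
  virtual_router_id 50
  advert_int 1
  unicast_src_ip 192.0.2.11       # mgmt IP of this host (illustrative)
  unicast_peer {
      192.0.2.12                  # mgmt IPs of the peer hosts (illustrative)
      192.0.2.13
  }
  virtual_ipaddress {
      10.X.X.10/24 dev eth1       # the VIP on the prod network
  }
}
```

With those two directives removed, keepalived falls back to standard multicast VRRP (224.0.0.18) on the VRRP interface, which is why the instances could see each other on the prod network in my test.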

My question: did I miss something in the configuration? Or should we add some kind of option to generate keepalived's config without `unicast_src_ip` and `unicast_peer`?

Thanks,

Luis Domingues
Proton AG
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


