Hey Gregory,

Thank you for your response. Understood! This tells me I am approaching the issue from the wrong angle, I suppose.

Thank you!

Josh

On Wed, Sep 1, 2021 at 8:26 AM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>
> On Wed, Sep 1, 2021 at 5:40 AM Joshua West <josh@xxxxxxx> wrote:
> >
> > Hello,
> >
> > 5 node cluster, with co-located mons, mgrs, mds, and osds
> >
> > Each node has a:
> > - 192.168.xx.xx 1Gb/s connection
> > - 10.xx.yy.xx 10Gb/s connection
> > - 10.aa.yy.xx 50Gb/s connection
> > - and a couple of unused ethernet ports
> >
> > 192 and 10.aa both have a switch dedicated to the network, and 10.xx
> > uses a star topology between nodes, with some IP routing for two nodes
> > which are missing a 10GbE adapter.
> >
> > I am currently leveraging the highest-throughput connection for my
> > Ceph cluster (public and cluster networks), as everything including
> > the "users" is co-located on these 5 machines (Proxmox).
> >
> > However, I have been having periodic, unpredictable issues with that
> > network (InfiniBand, opensm, etc.)
> >
> > While those issues are fixed for now, they have me thinking about
> > fault tolerance.
> >
> > Do monitors support multiple IP addresses to increase network fault
> > tolerance? Alternatively, what is the best way to ensure that if,
> > say, the 10.aa... switch goes down, but all other networks are fine,
> > Ceph remains connected?
> >
> > I see in the Ceph documentation
> > (https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/)
> > that the public and cluster networks both support this format:
> > {ip-address}/{netmask} [, {ip-address}/{netmask}]
>
> Unfortunately, this interface exists so that you can expose a
> messenger v1 interface and a messenger v2 interface, so that e.g. old
> kernel clients can still connect, but new userspace daemons can use
> the messenger v2 goodness.
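(For anyone finding this thread later: the dual-address form Greg describes shows up in a monitor's advertised address as protocol-version pairs on a single IP, not as two independent network paths. A minimal sketch, with made-up addresses:)

```ini
[global]
# One monitor, reachable at ONE IP, advertised twice:
# v2 (msgr2, port 3300) for modern daemons/clients, and
# v1 (legacy messenger, port 6789) for e.g. old kernel clients.
mon_host = [v2:10.aa.yy.11:3300,v1:10.aa.yy.11:6789]
# Per Greg's point above, there is no supported form that gives a
# single mon two alternative endpoints on different networks for
# failover, e.g. [v2:10.aa.yy.11:3300,v2:10.xx.yy.11:3300].
```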
> It won't generally bind to multiple IPs in a way that allows
> failover, and nothing in the Ceph protocols supports the idea of a
> single monitor which can be reached at multiple network endpoints.
>
> So what you're asking about isn't really possible. :/
> -Greg
>
> > However, I do not see confirmation in the docs as to how to address
> > mons changing networks in the event of a network issue.
> >
> >
> > Josh
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx