I wonder if this would be disruptive even if `nodown` were set. When a given OSD latches onto the new replication network, I would expect it to want to use it for heartbeats, but if its heartbeat peers aren't using the replication network yet, they won't be reachable. Unless something has changed since I tried this with Luminous.

> On Oct 20, 2020, at 12:47 AM, Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> a quick search [1] shows this:
>
> ---snip---
> # set new config
> ceph config set global cluster_network 192.168.1.0/24
>
> # let orchestrator reconfigure the daemons
> ceph orch daemon reconfig mon.host1
> ceph orch daemon reconfig mon.host2
> ceph orch daemon reconfig mon.host3
> ceph orch daemon reconfig osd.1
> ceph orch daemon reconfig osd.2
> ceph orch daemon reconfig osd.3
> ---snip---
>
> I haven't tried it myself, though.
>
> Regards,
> Eugen
>
> [1] https://stackoverflow.com/questions/61763230/configure-a-cluster-network-with-cephadm
>
>
> Quoting Amudhan P <amudhan83@xxxxxxxxx>:
>
>> Hi,
>>
>> I have installed a Ceph Octopus cluster using cephadm with a single
>> network. Now I want to add a second network and configure it as the
>> cluster address.
>>
>> How do I configure Ceph to use the second network as the cluster network?
>>
>> Amudhan
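
For anyone who does attempt this, an untested sketch of a nodown-guarded variant of Eugen's steps is below. The hostnames and OSD IDs are the placeholders from his example; substitute the actual daemon names that `ceph orch ps` reports for your cluster:

---snip---
# Prevent OSDs from being marked down while their heartbeat peers
# are still being moved onto the new network.
ceph osd set nodown

# Point the cluster at the new replication network.
ceph config set global cluster_network 192.168.1.0/24

# Have the orchestrator rewrite each daemon's config
# (placeholder names; use the output of `ceph orch ps`).
for daemon in mon.host1 mon.host2 mon.host3 osd.1 osd.2 osd.3; do
    ceph orch daemon reconfig "$daemon"
done

# Once every OSD shows a back_addr on the new network
# (check with `ceph osd metadata <id>`), re-enable down reporting.
ceph osd unset nodown
---snip---

Whether `reconfig` restarts the daemons or only rewrites their config files may vary by release; an explicit `ceph orch daemon restart <name>` per daemon might also be needed before the new network takes effect.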