If I remember right, someone has done this on a live cluster without any
issues. I seem to remember that there was a fallback mechanism: if an OSD
couldn't be reached on the cluster network, it could still be contacted on
the public network.

You could test it pretty easily without much impact. Take one OSD that has
both networks, configure it, and restart the process. If all the nodes
(specifically the old ones with only one network) are able to connect to
it, then you are good to go by restarting one OSD at a time. A rough
sketch of the config and the commands is below the quoted message.

On Wed, Mar 4, 2015 at 4:17 AM, Andrija Panic <andrija.panic@xxxxxxxxx> wrote:
> Hi,
>
> I have a live cluster with only a public network (so no explicit network
> configuration in the ceph.conf file).
>
> I'm wondering what the procedure is to implement a dedicated
> replication/private network alongside the public one.
> I've read the manual and know how to do it in ceph.conf, but since this
> is an already running cluster - what should I do after I change
> ceph.conf on all nodes?
> Restart OSDs one by one, or...? Is there any downtime expected for the
> replication network to actually be implemented completely?
>
> Another related question:
>
> I'm also demoting some old OSDs on old servers. I will have them all
> stopped, but I would like to implement the replication network before
> actually removing the old OSDs from the crush map - since a lot of data
> will be moved around.
>
> My old nodes/OSDs (which will be stopped before I implement the
> replication network) do NOT have a dedicated NIC for the replication
> network, in contrast to the new nodes/OSDs. So there will still be
> references to these old OSDs in the crush map.
> Will it be a problem that I'm implementing a replication network that
> WILL work on the new nodes/OSDs, but not on the old ones since they
> don't have a dedicated NIC? I guess not, since the old OSDs are stopped
> anyway, but I would like an opinion.
>
> Or perhaps I might remove the OSDs from the crush map after first setting
> nobackfill and norecover (so no rebalancing happens), and then implement
> the replication network?
>
> Sorry for the old post, but...
>
> Thanks,
> --
>
> Andrija Panić
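As a sketch - the subnets below are placeholders, not anything from this
thread - the ceph.conf change on every node would look something like:

    [global]
        public network  = 192.168.1.0/24   # existing client-facing subnet (placeholder)
        cluster network = 10.10.10.0/24    # new dedicated replication subnet (placeholder)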
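For the single-OSD test described above, the restart and check might look
like this (osd.0 is a placeholder id, and the exact restart command
depends on your distro/init system):

    # restart just one OSD (sysvinit syntax shown; adjust for upstart/systemd)
    sudo /etc/init.d/ceph restart osd.0

    # verify it came back up and registered an address on both networks
    ceph osd dump | grep osd.0
    ceph -s

If the old single-NIC nodes can still reach that OSD and the cluster stays
healthy, repeat for the remaining OSDs one at a time.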
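For the last question, the flag-based removal would be roughly as follows
(osd.12 is a placeholder id; repeat the removal for each old OSD). As I
understand it, with nobackfill and norecover set the PGs get remapped but
no data actually moves until the flags are cleared:

    # block recovery/backfill while the crush map changes
    ceph osd set nobackfill
    ceph osd set norecover

    # remove an old, already-stopped OSD from the cluster
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12

    # once the cluster network is in place, allow rebalancing to proceed
    ceph osd unset nobackfill
    ceph osd unset norecover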