I have OSD nodes combined with MDS, MGR and MONs. There are also a few VMs running on them with libvirt. However, both the client and cluster networks are on IPv4 (and I have no experience with IPv6). The cluster network is on a switch not connected to the internet.

- I should enable IPv6 again
- enable forwarding so cluster communication is routed through the client interfaces?
- test if the connection works between new and old
- then add two VMs with monitors, bringing the total to 5
- then move one OSD node with a MON to the new location
- wait for recovery
- then move the two VM MONs to the new-location OSD node, so there are 3 there
- move an OSD node to the new location
- wait for recovery

Etc. Something like this? What is the idea behind having 5 monitors in this migration?

-----Original Message-----
To: ceph-users
Subject: Re: moving small production cluster to different datacenter

On 1/28/20 11:19 AM, Marc Roos wrote:
>
> Say one is forced to move a production cluster (4 nodes) to a
> different datacenter. What options do I have, other than just turning
> it off at the old location and on at the new location?
>
> Maybe buying some extra nodes, and moving one node at a time?

I did this once. That cluster was running IPv6-only (still is) and thus I had the flexibility of new IPs.

First I temporarily moved the MONs from hardware to virtual machines; the MONMAP went from 3 to 5 MONs. Then I moved the MONs one by one to the new DC and then removed the 2 additional VMs.

Then I set the 'noout' flag and moved the OSD nodes one by one. These datacenters were located very close together, so each node could be moved within 20 minutes. Wait for recovery to finish and then move the next node.

Keep in mind that there is/might be additional latency between the two datacenters.
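The "enable forwarding" step in the plan above could look roughly like this on whichever hosts route between the client and cluster interfaces. This is only a sketch: the sysctl keys are standard Linux ones, but whether you need them on every node or only on a routing host depends on your topology, and the ping target is a hypothetical documentation-prefix address.

```shell
# Re-enable IPv6 (in case it was disabled) and turn on IPv6 forwarding.
# These are runtime changes; persist them in /etc/sysctl.d/ once verified.
sysctl -w net.ipv6.conf.all.disable_ipv6=0
sysctl -w net.ipv6.conf.all.forwarding=1

# Then test old <-> new connectivity before touching any MON or OSD,
# e.g. ping a (hypothetical) address at the new site:
# ping -6 -c 3 2001:db8::10
```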
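On the question of why 5 monitors: MONs form a majority quorum, so a cluster of n MONs stays up only while more than n/2 of them are reachable. With 3 MONs, having one in transit leaves 2 of 3 — still quorate, but with zero failure margin; with 5, one MON can be moving and you can still lose another. The arithmetic, as a quick sketch:

```shell
# Tolerated simultaneous MON failures = (n - 1) / 2 (integer division)
for n in 3 5; do
  echo "mons=$n tolerates $(( (n - 1) / 2 )) down"
done
# prints:
# mons=3 tolerates 1 down
# mons=5 tolerates 2 down
```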
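The noout-and-move part of Wido's procedure can be sketched with standard ceph CLI commands, repeated once per OSD host. The flag and commands are real Ceph CLI; when to unset the flag (per move vs. after the whole migration) is a judgment call:

```shell
# Before shutting down an OSD host, stop Ceph from marking its OSDs
# 'out' and rebalancing data away while the host is in transit:
ceph osd set noout

# ... power off the host, transport it, boot it at the new site ...

# Watch cluster state until all PGs are active+clean again:
ceph -s

# Once the hosts are moved and recovery has finished:
ceph osd unset noout
```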
Wido

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx