I think it will be easier. You just have to check whether latency is going to be an issue. And if you have enough space, maybe increase the replication so you can move more nodes at once?

-----Original Message-----
From: Rafał Wądołowski [mailto:rwadolowski@xxxxxxxxxxxxxx]
Sent: 19 February 2020 14:38
To: ceph-users@xxxxxxx; Marc Roos
Subject: Re: Re: Migrating/Relocating ceph cluster

Yeah, I saw your thread, but the problem is more complicated due to the size of the cluster... I'm trying to figure out the solution that will minimize downtime and migration time.

Best Regards,
Rafał Wądołowski

On 19.02.2020 14:23, Marc Roos wrote:
> I asked the same not so long ago; check the archive, quite useful
> replies.
>
> -----Original Message-----
> Sent: 19 February 2020 14:20
> To: ceph-users@xxxxxxx
> Subject: Migrating/Relocating ceph cluster
>
> Hi,
>
> I am looking for a good way of migrating/relocating a ceph cluster.
> It holds about 2 PB net, mainly RBD, but object storage is also used.
> The new location is far away, about 1,500 kilometers. Of course I
> have to minimize the downtime of the cluster :)
>
> Right now I see the following scenarios:
>
> 1. Build an identical cluster. Freeze the source. Copy everything
>    with cppool/rbd mirror. Relocate the servers and then power them
>    on.
> 2. Run cluster mirroring over the network.
> 3. Use a cache tier.
>
> I'm starting research on this migration, so some of the solutions
> above are probably unfeasible.
>
> Maybe somebody has experience with migration between DCs?
>
> Any new ideas? Thoughts?
>
> Every comment will be helpful.
>
> --
> Regards,
>
> Rafał Wądołowski
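
For reference, a minimal sketch of the "increase replication so you can move more nodes at once" idea from the top of the thread. This assumes a replicated pool currently at size 3; the pool name "rbd" is a placeholder, not from the thread:

    # Extra replica buys headroom while a batch of hosts is in transit
    ceph osd pool set rbd size 4
    ceph osd pool set rbd min_size 2
    # Keep the cluster from rebalancing while those hosts are down
    ceph osd set noout
    # ...physically relocate the batch, power it up, wait for
    # recovery to finish, then:
    ceph osd unset noout

Note that bumping size 3 -> 4 on a ~2 PB pool means copying roughly another third of the data, so "if you have enough space" is doing a lot of work in that suggestion.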
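And a minimal sketch of what scenario 2 (one-way RBD mirroring to the new site) could look like. The cluster names "site-a"/"site-b", the pool name "rbd", and the image name "myimage" are placeholders, and this omits the keyring/config exchange and running the rbd-mirror daemon at the destination:

    # Journal-based mirroring requires the journaling feature per image
    rbd --cluster site-a feature enable rbd/myimage exclusive-lock journaling
    # Enable pool-mode mirroring on both sides
    rbd --cluster site-a mirror pool enable rbd pool
    rbd --cluster site-b mirror pool enable rbd pool
    # Register the source cluster as a peer of the destination
    rbd --cluster site-b mirror pool peer add rbd client.site-a@site-a
    # Watch replication progress
    rbd --cluster site-b mirror pool status rbd --verbose

Whether the initial ~2 PB sync over a 1,500 km WAN link is feasible depends entirely on the available bandwidth, which is presumably why the latency question above matters.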