Re: Migrating from one Ceph cluster to another

I'd considered a similar migration path in the past (slowly rotate new OSDs into the cluster and the old ones out), but after watching some of the bugs and discussions around Ceph cache tiering and the like between Giant and Hammer/Jewel, I started leaning more towards rbd -c oldcluster.conf export | rbd -c newcluster.conf import. That route gives you time to test a completely independent setup for a while, do an RBD image format conversion along the way, and whatever else you need. You could even fail back (probably with some data loss) if necessary. In theory this could also be done with minimal downtime using the snapshot diff syncing process [1], no?
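
For what it's worth, here is a rough sketch of what that snapshot-diff sync between clusters might look like (the conf file names match the ones above; the pool, image, and snapshot names are placeholders):

  rbd -c oldcluster.conf snap create rbd/vm1@sync1
  rbd -c oldcluster.conf export rbd/vm1@sync1 - | rbd -c newcluster.conf import - rbd/vm1
  rbd -c newcluster.conf snap create rbd/vm1@sync1   # destination needs the base snapshot too

  # repeat as needed while the VM is still running, then once more with it shut down
  rbd -c oldcluster.conf snap create rbd/vm1@sync2
  rbd -c oldcluster.conf export-diff --from-snap sync1 rbd/vm1@sync2 - | rbd -c newcluster.conf import-diff - rbd/vm1

The final export-diff pass with the VM shut down would then be the only real downtime.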

Anyway, does anyone have operational experience to share with the rbd export | rbd import method between clusters?

Thanks,
Brian

[1] <http://ceph.com/planet/convert-rbd-to-format-v2/>

Michael Kuriger <mk7193@xxxxxx> 2016-06-09 16:44:
This is how I did it. I upgraded my old cluster first (live, one node at a time). Then I added my new OSD servers to the running cluster. Once they were all added, I set the weight to 0 on all of my original OSDs. This causes a lot of I/O, but all the data gets migrated to the new servers. Then you can remove the old OSD servers from the cluster.
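
A rough sketch of that drain step, assuming "weight" means the CRUSH weight and using placeholder OSD IDs:

  for id in 0 1 2 3; do                  # IDs of the old OSDs -- placeholders
      ceph osd crush reweight osd.$id 0  # triggers backfill onto the new OSDs
  done
  ceph -s                                # wait until all PGs are active+clean again
  # then mark the old OSDs out and remove them (ceph osd out / ceph osd crush remove / ceph osd rm)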



 
Michael Kuriger
Sr. Unix Systems Engineer

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Wido den Hollander
Sent: Thursday, June 09, 2016 12:47 AM
To: Marek Dohojda; ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Migrating from one Ceph cluster to another


On 8 June 2016 at 22:49, Marek Dohojda <mdohojda@xxxxxxxxxxxxxxxxxxx> wrote:


I have a Ceph cluster (Hammer) and I just built a new cluster
(Infernalis). The existing cluster holds the disk images of KVM-based VMs.

What I would like to do is move all the data from one Ceph cluster to
the other. However, the only way I could find in my Google searches is
to export each image to a local disk, copy it across to the new
cluster, and import it there.

I am hoping there is a way to just sync the data from one cluster to
the other (and I do realize that the KVM guests will have to be down
for the full migration).


You can do this with the rbd command using export and import.

Something like:

$ rbd export image1 - | rbd import - image1

Here each rbd command connects to a different Ceph cluster; see --help for the options to do that.

You can run this in a loop with the output of 'rbd ls'.
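
For example, a rough sketch of such a loop, assuming placeholder conf files old.conf and new.conf and the pool 'rbd':

  for img in $(rbd -c old.conf ls rbd); do
      rbd -c old.conf export rbd/$img - | rbd -c new.conf import - rbd/$img
  done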

But that's about the only way.

Wido

Thank you


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
