Re: Migrating data into a newer ceph instance

Thanks, Luis.

The motivation for moving to the newer version is to keep up to date with Ceph development, since we suspect the older radosgw could not be restarted, possibly due to a library mismatch.

Do you know whether Ceph's self-healing works across different versions?

Fangzhe

From: Luis Periquito [mailto:periquito@xxxxxxxxx]
Sent: Wednesday, August 26, 2015 10:11 AM
To: Chang, Fangzhe (Fangzhe)
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Migrating data into a newer ceph instance

I would say the easiest way is to leverage Ceph's self-healing: add the new nodes to the old cluster, allow or force all the data to migrate between nodes, and then remove the old ones.
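
A minimal sketch of that procedure, assuming ceph-deploy is in use (the hostname, device path, and osd.0 are just illustrative):

    # on each new node: create and start a new OSD
    ceph-deploy osd create new-node1:/dev/sdb

    # watch the data rebalance onto the new OSDs
    ceph -s

    # once HEALTH_OK, drain and remove each old OSD in turn
    ceph osd out 0
    # wait for all PGs to go active+clean again, then stop the
    # daemon on the old node (init-system dependent), e.g.:
    sudo service ceph stop osd.0
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0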

Well, to be fair, you could probably just install radosgw on another node and use it as your gateway, without even needing to create a new OSD node...
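
With civetweb on Hammer, the gateway's ceph.conf section can be as small as something like this (the client name, host, and port here are only examples):

    [client.radosgw.gateway]
    host = gw-node
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw frontends = civetweb port=7480

and a keyring created along the lines of:

    ceph auth get-or-create client.radosgw.gateway osd 'allow rwx' mon 'allow rw' -o /etc/ceph/ceph.client.radosgw.keyring

The data stays on the existing OSDs; the new node only needs the cluster's ceph.conf and that keyring.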

Or was there a reason to create a new cluster? I can tell you that one of the clusters I have has been around since bobtail, and now it's hammer...

On Wed, Aug 26, 2015 at 2:50 PM, Chang, Fangzhe (Fangzhe) <fangzhe.chang@xxxxxxxxxxxxxxxxxx> wrote:

Hi,

We have been running Ceph/Radosgw version 0.80.7 (Giant) and have stored a fair amount of data in it. We use Ceph only as an object store, via radosgw. Last week the ceph-radosgw daemon suddenly refused to start (the logs show only an "initialization timeout" error, on CentOS 7). This prompted me to install a newer instance, Ceph/Radosgw version 0.94.2 (Hammer). The new instance has a different set of keyrings by default. The next step is to migrate all the data. Does anyone know how to get the existing data out of the old Ceph cluster (Giant) and into the new instance (Hammer)? Please note that in the old three-node cluster the Ceph OSDs are still running, but radosgw is not. Any suggestion will be greatly appreciated.
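
(In case it matters: while the OSDs are up, we can presumably still read the raw rgw objects directly with the rados tool; a sketch, assuming the pre-Hammer default pool names such as .rgw.buckets:)

    # list pools; bucket data normally lives in .rgw.buckets
    rados lspools

    # list the raw rgw objects in the data pool
    rados -p .rgw.buckets ls

    # copy a single object out to a local file
    rados -p .rgw.buckets get <object-name> ./object.dat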

Thanks.

Regards,

Fangzhe Chang

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
