Merging two active ceph clusters: suggestions needed

I would:

Keep Cluster A intact and migrate it to your new hardware. You can do this with no downtime, assuming you have enough IOPS to support data migration and normal usage simultaneously. Bring up the new OSDs and let everything rebalance, then remove the old OSDs one at a time. Replace the MONs one at a time. Since you will have the same data on the same cluster (but different hardware), you don't need to worry about mtimes or handling RBD or S3 data at all.
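The rolling hardware swap could look roughly like this (OSD IDs, MON names, and the IP are placeholders; wait for `HEALTH_OK` between each step):

```shell
# After the new OSDs are up and the cluster has rebalanced onto them,
# drain and remove each old OSD, one at a time:
ceph osd out 0                     # stop placing new data on osd.0
ceph -w                            # watch until all PGs are active+clean
sudo systemctl stop ceph-osd@0     # stop the daemon on the old host
ceph osd crush remove osd.0        # remove it from the CRUSH map
ceph auth del osd.0                # delete its cephx key
ceph osd rm 0                      # remove it from the OSD map

# Replace MONs one at a time, keeping quorum (an odd count) throughout:
ceph mon add newmon1 192.0.2.10    # MON on new hardware (placeholder IP)
ceph mon remove oldmon1            # then retire one old MON
```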

Make sure you have top-level ceph credentials on the new cluster that will work for current users of Cluster B.
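Cephx keys can be carried over directly, so Cluster B's clients keep their existing credentials ("client.bob" is a placeholder name):

```shell
# On Cluster B: export the keyring for each client that must keep working:
ceph auth get client.bob -o /tmp/client.bob.keyring

# On the new Cluster A: import it, then confirm the caps look right:
ceph auth import -i /tmp/client.bob.keyring
ceph auth get client.bob
```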

Use a librbd-aware tool to migrate the RBD volumes from Cluster B onto the new Cluster A. qemu-img comes to mind. This would require downtime for each volume, but not necessarily all at the same time.
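With qemu-img the per-volume copy might look like this (pool and image names are placeholders; `:conf=` points each end at the right cluster's config and keyring):

```shell
# Stop all clients of the volume first, then copy it between clusters:
qemu-img convert -p -f raw -O raw \
    rbd:volumes/vol1:conf=/etc/ceph/cluster-b.conf \
    rbd:volumes/vol1:conf=/etc/ceph/cluster-a.conf
```

The `-p` flag shows progress, which is worth having on multi-terabyte volumes.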

Migrate your S3 user accounts from Cluster B to the new Cluster A (should be easily scriptable with e.g. JSON output from radosgw-admin).
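A sketch of that script, assuming jq is available ("alice" is a placeholder uid, and only the first key pair is copied):

```shell
# On Cluster B: dump each user's metadata as JSON:
radosgw-admin user info --uid=alice > alice.json

# On the new Cluster A: recreate the user with the same credentials so
# existing S3 clients keep working unchanged:
radosgw-admin user create --uid=alice \
    --display-name="$(jq -r .display_name alice.json)" \
    --access-key="$(jq -r '.keys[0].access_key' alice.json)" \
    --secret="$(jq -r '.keys[0].secret_key' alice.json)"
```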

Check for and resolve S3 bucket name conflicts between Cluster A and Cluster B.
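One way to find the conflicts, assuming jq (`radosgw-admin bucket list` prints a JSON array of bucket names):

```shell
radosgw-admin bucket list > buckets-a.json    # run on Cluster A
radosgw-admin bucket list > buckets-b.json    # run on Cluster B

# Any name printed here exists on both clusters and must be resolved
# before the data copy (comm -12 prints lines common to both inputs):
comm -12 <(jq -r '.[]' buckets-a.json | sort) \
         <(jq -r '.[]' buckets-b.json | sort)
```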

Migrate your S3 data from Cluster B to the new Cluster A using an S3-level tool. s3cmd comes to mind.
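A per-bucket sketch with s3cmd, staged through local disk (`~/.s3cfg-b` and `~/.s3cfg-a` are assumed config files pointing at each radosgw endpoint with the right credentials):

```shell
# Pull the bucket down from Cluster B:
s3cmd -c ~/.s3cfg-b sync s3://mybucket/ /srv/staging/mybucket/

# Create the bucket on the new Cluster A and push the data up:
s3cmd -c ~/.s3cfg-a mb s3://mybucket
s3cmd -c ~/.s3cfg-a sync /srv/staging/mybucket/ s3://mybucket/
```

Since only Cluster B's S3 data (<50 GiB) moves this way, the staging space needed is modest; Cluster A's mtime-sensitive S3 data never leaves its cluster under this plan.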

Fine-tuning and automating the above is left as an exercise for the reader, but it should all be possible with built-in and/or commodity tools.

On Sep 20, 2014, at 11:15 PM, Robin H. Johnson <robbat2 at gentoo.org> wrote:

> For a variety of reasons, none good anymore, we have two separate Ceph
> clusters.
> 
> I would like to merge them onto the newer hardware, with as little
> downtime and data loss as possible; then discard the old hardware.
> 
> Cluster A (2 hosts):
> - 3TB of S3 content, >100k files, file mtimes important
> - <500GB of RBD volumes, exported via iscsi
> 
> Cluster B (4 hosts):
> - <50GiB of S3 content
> - 7TB of RBD volumes, exported via iscsi
> 
> Short of finding somewhere to dump all of the data from one side, and
> re-importing it after merging with that cluster as empty; are there any
> other alternatives available to me?
> 
> -- 
> Robin Hugh Johnson
> Gentoo Linux: Developer, Infrastructure Lead
> E-Mail     : robbat2 at gentoo.org
> GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
