Merging two active ceph clusters: suggestions needed

Yehuda, are there any potential problems there?  In particular, I'm wondering
whether duplicate bucket names that don't have the same contents might cause
conflicts.  Also, would the second cluster be read-only while replication is
running?

Robin, are the mtimes in Cluster B's S3 data important?  Just wondering
whether it would be easier to move the data from B to A, and to move nodes
from B to A as B shrinks, then remove the old A nodes when it's all done.
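
If the node-by-node route wins out, draining an old node is just the
standard out-and-remove dance per OSD.  A rough sketch for one OSD (the
ID is a placeholder, commands untested here):

    # mark the OSD out and let data migrate off it
    ceph osd out 12
    ceph -w                      # wait for PGs to return to active+clean
    # once empty, stop the daemon and remove the OSD for good
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12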


On Tue, Sep 23, 2014 at 10:04 PM, Yehuda Sadeh <yehuda at redhat.com> wrote:

> On Tue, Sep 23, 2014 at 7:23 PM, Robin H. Johnson <robbat2 at gentoo.org>
> wrote:
> > On Tue, Sep 23, 2014 at 03:12:53PM -0600, John Nielsen wrote:
> >> Keep Cluster A intact and migrate it to your new hardware. You can do
> >> this with no downtime, assuming you have enough IOPS to support data
> >> migration and normal usage simultaneously. Bring up the new OSDs and
> >> let everything rebalance, then remove the old OSDs one at a time.
> >> Replace the MONs one at a time. Since you will have the same data on
> >> the same cluster (but different hardware), you don't need to worry
> >> about mtimes or handling RBD or S3 data at all.
> > The B side already has data, however, and that's one of the merge
> > problems (see below re S3).
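
(Re John's "replace the MONs one at a time": per monitor that's roughly
the below, assuming the new mon's data directory has already been
bootstrapped; the names and address are placeholders.)

    # add the new monitor to the monmap, then retire an old one
    ceph mon add mon-new 10.0.0.5:6789
    ceph mon remove mon-old
    ceph quorum_status           # confirm quorum before the next swap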
> >
> >> Make sure you have top-level ceph credentials on the new cluster that
> >> will work for current users of Cluster B.
> >>
> >> Use a librbd-aware tool to migrate the RBD volumes from Cluster B onto
> >> the new Cluster A. qemu-img comes to mind. This would require downtime
> >> for each volume, but not necessarily all at the same time.
> > Thanks, qemu-img didn't come to mind as an RBD migration tool.
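
(For the archives: the qemu-img route would look something like the
below; pool/image names and conf paths are made up, and the volume has
to be quiesced while it copies.)

    # copy one RBD volume between clusters via librbd
    qemu-img convert -p -f raw -O raw \
        rbd:volumes/vm1:conf=/etc/ceph/cluster-b.conf \
        rbd:volumes/vm1:conf=/etc/ceph/cluster-a.conf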
> >
> >> Migrate your S3 user accounts from Cluster B to the new Cluster A
> >> (should be easily scriptable with e.g. JSON output from
> >> radosgw-admin).
> > It's fixed now, but it previously wasn't possible to create all the
> > various keys.
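
(The scripting can be as simple as round-tripping each user's metadata
as JSON -- the uid is a placeholder; "get" runs against B, "put"
against A.)

    # on cluster B: dump the user's metadata, access keys included
    radosgw-admin metadata get user:someuser > someuser.json
    # on cluster A: recreate the user from that JSON
    radosgw-admin metadata put user:someuser < someuser.json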
> >
> >> Check for and resolve S3 bucket name conflicts between Cluster A and
> >> Cluster B.
> > None.
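
(Still worth verifying mechanically, e.g. by diffing bucket listings
from the two clusters; assumes jq is available.)

    # "radosgw-admin bucket list" prints a JSON array of bucket names
    radosgw-admin bucket list | jq -r '.[]' | sort > buckets-a.txt
    # ...same on cluster B into buckets-b.txt, then:
    comm -12 buckets-a.txt buckets-b.txt   # any output = a name conflict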
> >
> >> Migrate your S3 data from Cluster B to the new Cluster A using an
> >> S3-level tool. s3cmd comes to mind.
> > s3cmd does not preserve mtimes, ACLs or CORS data; that's the largest
> > part of the concern.
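
(For completeness, the s3cmd route is two configs and a bounce through
local disk, roughly as below -- but as Robin says it drops mtimes, ACLs
and CORS, so it only fits data where those don't matter.)

    # ~/.s3cfg-b and ~/.s3cfg-a point at the two RGW endpoints
    s3cmd -c ~/.s3cfg-b sync s3://somebucket/ ./somebucket/
    s3cmd -c ~/.s3cfg-a mb s3://somebucket
    s3cmd -c ~/.s3cfg-a sync ./somebucket/ s3://somebucket/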
>
> You need to set up a second rgw zone, and use the radosgw sync agent to
> sync data to the secondary zone. That will preserve mtimes and ACLs.
> Once that's complete, you can then turn the secondary zone into your
> primary.
>
> Yehuda
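
(For reference, once the second zone exists the sync agent itself is a
small config plus one long-running process; the endpoint and keys below
are placeholders for the zones' system users.)

    # rgw-sync.conf
    src_access_key: SRC_SYSTEM_USER_ACCESS_KEY
    src_secret_key: SRC_SYSTEM_USER_SECRET_KEY
    destination: http://rgw-secondary.example.com:80
    dest_access_key: DEST_SYSTEM_USER_ACCESS_KEY
    dest_secret_key: DEST_SYSTEM_USER_SECRET_KEY
    log_file: /var/log/radosgw/radosgw-sync.log

    # run until the secondary zone catches up
    radosgw-agent -c rgw-sync.conf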

