Merging two active ceph clusters: suggestions needed

On Tue, Sep 23, 2014 at 7:23 PM, Robin H. Johnson <robbat2 at gentoo.org> wrote:
> On Tue, Sep 23, 2014 at 03:12:53PM -0600, John Nielsen wrote:
>> Keep Cluster A intact and migrate it to your new hardware. You can do
>> this with no downtime, assuming you have enough IOPS to support data
>> migration and normal usage simultaneously. Bring up the new OSDs and
>> let everything rebalance, then remove the old OSDs one at a time.
>> Replace the MONs one at a time. Since you will have the same data on
>> the same cluster (but different hardware), you don't need to worry
>> about mtimes or handling RBD or S3 data at all.
> The B side already has data however, and that's one of the merge
> problems (see below re S3).
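>
For anyone following along, the drain-and-replace sequence John
describes looks roughly like this (a sketch only; the osd id and
service names are placeholders, and you'd wait for the cluster to
return to HEALTH_OK between steps):

    # after the new OSDs are up and in, watch the rebalance finish
    ceph osd tree        # confirm the new OSDs are up/in
    ceph -w              # watch recovery until HEALTH_OK

    # then drain and remove each old OSD, one at a time
    ceph osd out 3                   # stop placing data on osd.3
    # ... wait for the rebalance to complete ...
    service ceph stop osd.3          # stop the daemon on its host
    ceph osd crush remove osd.3
    ceph auth del osd.3
    ceph osd rm 3

MONs are replaced similarly: add a new mon, wait for it to join the
quorum ("ceph mon stat"), then remove an old one with "ceph mon
remove <name>", keeping an odd number of mons throughout.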
>
>> Make sure you have top-level ceph credentials on the new cluster that
>> will work for current users of Cluster B.
>>
>> Use a librbd-aware tool to migrate the RBD volumes from Cluster B onto
>> the new Cluster A. qemu-img comes to mind. This would require downtime
>> for each volume, but not necessarily all at the same time.
> Thanks, qemu-img didn't come to mind as an RBD migration tool.
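>
Per volume it comes down to something like this (a sketch; the pool
and image names and the per-cluster conf paths are made up, and the
VM should be stopped while its disk is copied):

    # copy one RBD volume from cluster B into the new cluster A;
    # each side is addressed through its own ceph.conf
    qemu-img convert -f raw -O raw \
        rbd:volumes/vm1-disk:conf=/etc/ceph/cluster-b.conf:id=admin \
        rbd:volumes/vm1-disk:conf=/etc/ceph/cluster-a.conf:id=admin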
>
>> Migrate your S3 user accounts from Cluster B to the new Cluster A
>> (should be easily scriptable with e.g. JSON output from
>> radosgw-admin).
> It's fixed now, but it didn't use to be possible to create all the
> various keys.
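>
The export/import can be scripted per user along these lines (a
sketch; the uid, the conf paths, and the use of jq to pull fields out
of the JSON are all assumptions):

    # dump the user's metadata as JSON from cluster B
    radosgw-admin -c /etc/ceph/cluster-b.conf user info \
        --uid=alice > alice.json

    # recreate the user on the new cluster A with the same keys
    radosgw-admin -c /etc/ceph/cluster-a.conf user create \
        --uid=alice \
        --display-name="$(jq -r '.display_name' alice.json)" \
        --access-key="$(jq -r '.keys[0].access_key' alice.json)" \
        --secret="$(jq -r '.keys[0].secret_key' alice.json)"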
>
>> Check for and resolve S3 bucket name conflicts between Cluster A and
>> ClusterB.
> None.
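>
(For anyone who needs to run the same check, a quick way is to diff
the bucket listings; a sketch, assuming jq and per-cluster conf
paths:)

    radosgw-admin -c /etc/ceph/cluster-a.conf bucket list \
        | jq -r '.[]' | sort > buckets-a.txt
    radosgw-admin -c /etc/ceph/cluster-b.conf bucket list \
        | jq -r '.[]' | sort > buckets-b.txt
    comm -12 buckets-a.txt buckets-b.txt   # prints any conflicts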
>
>> Migrate your S3 data from Cluster B to the new Cluster A using an
>> S3-level tool. s3cmd comes to mind.
> s3cmd does not preserve mtimes, ACLs or CORS data; that's the largest
> part of the concern.
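
For completeness, the s3cmd route would be a two-step copy per bucket,
along these lines (a sketch; the bucket name and the two config files
are placeholders). The objects are rewritten rather than copied with
their attributes, which is why the metadata gets lost:

    # pull everything from cluster B, then push it to cluster A
    s3cmd -c ~/.s3cfg-cluster-b get --recursive s3://mybucket ./mybucket/
    s3cmd -c ~/.s3cfg-cluster-a put --recursive ./mybucket/ s3://mybucket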

You need to set up a second rgw zone, and use the radosgw sync agent to
sync data to the secondary zone. That will preserve mtimes and ACLs.
Once that's complete you could then turn the secondary zone into your
primary.
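
A minimal agent setup looks something like this (a sketch based on
the federated gateway configuration; the zone names, endpoints and
keys are placeholders):

    # /etc/ceph/radosgw-agent/default.conf
    src_zone: zone-b
    source: http://rgw-b.example.com:80
    src_access_key: SRC_ACCESS_KEY
    src_secret_key: SRC_SECRET_KEY
    dest_zone: zone-a
    destination: http://rgw-a.example.com:80
    dest_access_key: DEST_ACCESS_KEY
    dest_secret_key: DEST_SECRET_KEY
    log_file: /var/log/radosgw/radosgw-sync.log

    # then run the agent against that config
    radosgw-agent -c /etc/ceph/radosgw-agent/default.conf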

Yehuda

