Dear list, hello Jason,
you may have seen my message on the Ceph mailing list about RBD pool
migration - it's a common situation that pools were created in a
sub-optimal fashion and e.g. their pg_num is (not yet) reducible, so
we're looking into means to "clone" an RBD pool into a new pool within
the same cluster (including snapshots).
We had looked into creating a tool for this job, but soon noticed that
we'd be duplicating basic functionality of rbd-mirror. So we tested the
following, which worked out nicely:
- create a test cluster (a Ceph cluster plus an OpenStack cluster using
an RBD pool) and some OpenStack instances
- create a second Ceph test cluster
- stop OpenStack
- use rbd-mirror to clone the RBD pool from the first to the second
Ceph cluster (i.e. aborting rbd-mirror once the initial copy was done)
- recreate the RBD pool on the first cluster
- use rbd-mirror to clone the mirrored pool back to the (newly
created) pool on the first cluster
- start OpenStack and work with the (recreated) pool on the first cluster
So using rbd-mirror, we could clone an RBD pool's content to a
differently structured pool on the same cluster - by using an
intermediate cluster.
@Jason: Looking at the commit history for rbd-mirror, it seems you
might be able to shed some light on this: Do you see an easy way to
modify rbd-mirror in such a fashion that instead of mirroring to a
pool on a different cluster (having the same pool name as the
original), mirroring would go to a pool on the *same* cluster
(obviously having a different pool name)?
From the "rbd cppool" perspective, a one-shot mode of operation would
be fully sufficient - but looking at the code, I have not even been
able to identify the spots where we might "cut away" the networking
part so that rbd-mirror could do an intra-cluster job.
Are you able to judge how much work would be needed to create a
one-shot, intra-cluster version of rbd-mirror? Might it even be
something that could be done as a simple enhancement?
Thank you for any information and/or opinion you care to share!
With regards,
Jens