Hi,
just one question that comes to mind: if you intend to migrate the images
separately, is it really necessary to set up mirroring? You could just
'rbd export' on the source cluster and 'rbd import' on the destination
cluster.
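Something along these lines (just a sketch; the pool/image names are
placeholders, and it assumes both clusters' conf and keyring files are
reachable from one host):

  rbd -c /etc/ceph/source.conf export rbd/myimage - \
    | rbd -c /etc/ceph/dest.conf import - rbd/myimage

The '-' streams the image data through stdout/stdin, so no intermediate
file is needed.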
Quoting Anthony D'Atri <aad@xxxxxxxxxxxxxx>:
I would like to use mirroring to facilitate migrating from an existing
Nautilus cluster to a new cluster running Reef. Right now I'm looking at
RBD mirroring. I have studied the RBD Mirroring section of the
documentation, but it is unclear to me which commands need to be issued on
each cluster and, for commands that take both clusters as arguments, where
to specify site-a vs. site-b.
I won’t go into the nitty-gritty, but note that you’ll likely run
the rbd-mirror daemon on the destination cluster, and it will need
reachability to all of the source cluster’s mons and OSDs. Maybe
mgrs, not sure.
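If the new Reef cluster is cephadm-managed, deploying the daemon there
could be as simple as (the placement host name is just an example):

  ceph orch apply rbd-mirror --placement="newhost1"

On a package-based install you would instead enable a
ceph-rbd-mirror@<client-id> systemd unit on the chosen host.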
Another concern: Both the old and new clusters internally have the default
name 'ceph' - when I set up the second cluster I saw no obvious reason to
change from the default. If this will cause a problem with mirroring, is
there a workaround?
The docs used to imply that the clusters need to have distinct
vanity names, but that was never actually the case — and vanity
names are no longer supported for clusters.
The ceph.conf files for both clusters need to be distinct and
present on the system where rbd-mirror runs. You can do this by
putting them in different subdirectories or giving them distinct
names, e.g. cephsource.conf and cephdest.conf. The filenames are
arbitrary; you'll just have to specify them when setting up the
rbd-mirror peers.
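For example, with the manual (non-bootstrap) peering flow and image-mode
mirroring, it might look roughly like this; 'site-a'/'site-b', the pool
name 'rbd' and the client names are all placeholders, and --cluster simply
resolves to /etc/ceph/<name>.conf:

  # both conf files (plus matching keyrings) live on the rbd-mirror host:
  #   /etc/ceph/site-a.conf   -> old Nautilus cluster
  #   /etc/ceph/site-b.conf   -> new Reef cluster
  rbd --cluster site-a mirror pool enable rbd image
  rbd --cluster site-b mirror pool enable rbd image
  rbd --cluster site-b mirror pool peer add rbd client.rbd-mirror-peer@site-a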
In the long run I will also be migrating a bunch of RGW data. If there are
advantages to using mirroring for this I'd be glad to know.
Whole different ballgame. You can use multisite or rclone or the
new Clyso “Chorus” tool for that.
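If you end up going the rclone route, a minimal sketch (the remote names
and bucket are made up; each remote would be configured with that
cluster's RGW endpoint and S3 credentials):

  rclone sync oldceph:mybucket newceph:mybucket --progress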
(BTW, the plan is to gradually decommission the systems from the old
cluster and add them to the new cluster. In this context, I am looking to
enable and disable mirroring on specific RBD images and RGW buckets as the
client workload is migrated from the old cluster to the new one.)
I’ve migrated thousands of RBD volumes between clusters this way.
It gets a bit tricky if a volume is currently attached.
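A rough per-image flow for journal-based mirroring, assuming the peering
sketched above and placeholder names again:

  # old (source) cluster: make sure journaling is on, then enable mirroring
  rbd --cluster site-a feature enable rbd/vm-disk-1 journaling
  rbd --cluster site-a mirror image enable rbd/vm-disk-1
  # new (destination) cluster: watch the image sync
  rbd --cluster site-b mirror image status rbd/vm-disk-1
  # at cutover: demote the old primary, promote the new copy
  rbd --cluster site-a mirror image demote rbd/vm-disk-1
  rbd --cluster site-b mirror image promote rbd/vm-disk-1
  # once the client runs against the new cluster, mirroring for that image
  # can be disabled on the (now primary) new copy and the old image removed
  rbd --cluster site-b mirror image disable rbd/vm-disk-1

The tricky part, as noted above, is images that are still attached at the
demote/promote step.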
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx