Re: Multiple rbd images from different clusters

One thing to keep in mind when piping rbd export/import is that the default is just a raw image dump.

So if you have a large, but not very full, RBD, you will dump all those zeroes into the pipeline.

In our case, it was actually faster to write to a (sparse) temp file and read it back in afterwards than to pipe the two commands together.

However, we are not using --export-format 2, which I now suspect would mitigate this.
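For reference, the two-step variant looks roughly like this (conf paths and image name are just placeholders borrowed from Frédéric's example below; add the --keyring options as in that example if your keys are not referenced from the conf files, and this assumes the target filesystem keeps the exported file sparse):

rbd -c /etc/ceph/Oceph.conf export rbd/disk_test /var/tmp/disk_test.raw
rbd -c /etc/ceph/Nceph.conf import /var/tmp/disk_test.raw rbd/disk_test
rm /var/tmp/disk_test.raw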

Jordan


On 6/5/2019 8:30 AM, CUZA Frédéric wrote:
Hi,

Thank you all for your quick answers.
I think that will solve our problem.

This is what we came up with:
rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring export rbd/disk_test - | rbd -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring import - rbd/disk_test

This rbd image is a test with only 5 GB of data inside it.

Unfortunately the command seems to be stuck and nothing happens on any of ports 7800 / 6789 / 22.

We can't find any logs on any of the monitors.

Thanks !

-----Original Message-----
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On behalf of Jason Dillaman
Sent: 04 June 2019 14:11
To: Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Multiple rbd images from different clusters

On Tue, Jun 4, 2019 at 8:07 AM Jason Dillaman <jdillama@xxxxxxxxxx> wrote:

On Tue, Jun 4, 2019 at 4:45 AM Burkhard Linke
<Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi,

On 6/4/19 10:12 AM, CUZA Frédéric wrote:

Hi everyone,



We want to migrate data from one cluster (Hammer) to a new one (Mimic). We do not wish to upgrade the current cluster, as all of its hardware is end-of-support (EOS) and we are also upgrading the server configuration.

We can’t find a “proper” way to mount two rbd images from two different clusters on the same host.

Does anyone know the “good” procedure to achieve this?

Copy your "/etc/ceph/ceph.conf" and associated keyrings for both
clusters to a single machine (preferably running a Mimic "rbd" client)
under "/etc/ceph/<cluster-name>.conf" and
"/etc/ceph/<cluster-name>.client.<id>.keyring".

You can then use "rbd -c <old cluster name> export --export-format 2 <image-spec> - | rbd -c <new cluster name> import --export-format=2 - <image-spec>". The "--export-format=2" option will also copy all associated snapshots with the images. If you don't want/need the snapshots, just drop that option.

That "-c" should be "--cluster" if specifying by name, otherwise with "-c" it's the full path to the two different conf files.


Just my 2 ct:

the 'rbd' command allows specifying a configuration file (-c). You need to set up two configuration files, one for each cluster. You can also use two different cluster names (--cluster option). AFAIK the name is only used to locate the configuration file. I'm not sure how well the kernel works with mapping RBDs from two different clusters...
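For what it's worth, selecting a cluster per command would presumably look like the following (cluster and image names purely illustrative); whether mapping from two clusters on one host behaves well is exactly the part I have not verified:

rbd --cluster old map rbd/disk_test
rbd --cluster new map rbd/disk_test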


If you only want to transfer RBDs from one cluster to another, you do not need to map and mount them; the 'rbd' command has the subcommands 'export' and 'import'. You can pipe them to avoid writing data to a local disk. This should be the fastest way to transfer the RBDs.
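In its simplest form that pipe is just (again, cluster and image names are only examples):

rbd --cluster old export rbd/disk_test - | rbd --cluster new import - rbd/disk_test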


Regards,

Burkhard

--
Dr. rer. nat. Burkhard Linke
Bioinformatics and Systems Biology
Justus-Liebig-University Giessen
35392 Giessen, Germany
Phone: (+49) (0)641 9935810




--
Jason



--
Jason

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



