Re: Move rbd-based image from one pool to another

Looks like I'm hitting this:

http://tracker.ceph.com/issues/34536
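
If the failure is specific to reading the format-2 stream from stdin (an assumption on my part, not confirmed in the tracker), staging the export in a temporary file might sidestep the pipe:

# rbd export --export-format 2 vms/vm-102-disk-2 /tmp/vm-102-disk-2.rbd2
# rbd import --export-format 2 /tmp/vm-102-disk-2.rbd2 vdisks/vm-102-disk-2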

On 07.11.18 at 20:46, Uwe Sauter wrote:
I tried that but it fails:

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 2 - vdisks/vm-102-disk-2
rbd: import header failed.
Importing image: 0% complete...failed.
rbd: import failed: (22) Invalid argument
Exporting image: 0% complete...failed.
rbd: export error: (32) Broken pipe


But the version seems to support that option:

# rbd help import
usage: rbd import [--path <path>] [--dest-pool <dest-pool>] [--dest <dest>]
                   [--image-format <image-format>] [--new-format]
                   [--order <order>] [--object-size <object-size>]
                   [--image-feature <image-feature>] [--image-shared]
                   [--stripe-unit <stripe-unit>]
                   [--stripe-count <stripe-count>] [--data-pool <data-pool>]
                   [--journal-splay-width <journal-splay-width>]
                   [--journal-object-size <journal-object-size>]
                   [--journal-pool <journal-pool>]
                   [--sparse-size <sparse-size>] [--no-progress]
                   [--export-format <export-format>] [--pool <pool>]
                   [--image <image>]
                   <path-name> <dest-image-spec>





On 07.11.18 at 20:41, Jason Dillaman wrote:
If your CLI supports "--export-format 2", you can just do "rbd export
--export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 2 -
vdisks/vm-102-disk-2" (you need to specify the data format on import,
otherwise it will assume it's copying a raw image).
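
One quick way to check whether the installed CLI knows the flag (a sketch) is to grep the help text:

# rbd help export | grep -- --export-format
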
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter <uwe.sauter.de@xxxxxxxxx> wrote:

I've been reading a bit and trying things out, but it seems I'm not quite where I want to be.

I want to migrate from pool "vms" to pool "vdisks".

# ceph osd pool ls
vms
vdisks

# rbd ls vms
vm-101-disk-1
vm-101-disk-2
vm-102-disk-1
vm-102-disk-2

# rbd snap ls vms/vm-102-disk-2
SNAPID NAME     SIZE   TIMESTAMP
    81 SL6_81  100GiB Thu Aug 23 11:57:05 2018
    92 SL6_82  100GiB Fri Oct 12 13:27:53 2018

# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(no output)

# rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - vdisks/vm-102-disk-2
Exporting image: 100% complete...done.
Importing image diff: 100% complete...done.

# rbd snap ls vdisks/vm-102-disk-2
(still no output)

It looks like the current content is copied but not the snapshots.

What am I doing wrong? Any help is appreciated.

Thanks,

         Uwe



On 07.11.18 at 14:39, Uwe Sauter wrote:
I'm still on Luminous (12.2.8). I'll have a look at the commands. Thanks.

On 07.11.18 at 14:31, Jason Dillaman wrote:
With the Mimic release, you can use "rbd deep-copy" to transfer the
images (and associated snapshots) to a new pool. Prior to that, you
could use "rbd export-diff" / "rbd import-diff" to manually transfer
an image and its associated snapshots.
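
For example (a sketch): on Mimic, a single command carries the snapshots across:

# rbd deep-copy vms/vm-102-disk-2 vdisks/vm-102-disk-2

On Luminous, the manual route replays one diff per snapshot, because "rbd import-diff" only creates a snapshot on the destination when the incoming diff ends at that snapshot. Assuming a 100GiB image with snapshots SL6_81 and SL6_82, it would look roughly like:

# rbd create --size 100G vdisks/vm-102-disk-2
# rbd export-diff --snap SL6_81 vms/vm-102-disk-2 - | rbd import-diff - vdisks/vm-102-disk-2
# rbd export-diff --from-snap SL6_81 --snap SL6_82 vms/vm-102-disk-2 - | rbd import-diff - vdisks/vm-102-disk-2
# rbd export-diff --from-snap SL6_82 vms/vm-102-disk-2 - | rbd import-diff - vdisks/vm-102-disk-2

The first diff creates SL6_81 on the destination, the second creates SL6_82, and the last brings the image head up to date.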
On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter <uwe.sauter.de@xxxxxxxxx> wrote:

Hi,

I have several VM images sitting in a Ceph pool, each with snapshots. Is there a way to move such images from one pool to another
and preserve the snapshots?

Regards,

          Uwe