Yes, that's it -- or upgrade your local Ceph client packages (if you are on luminous).

On Wed, Nov 7, 2018 at 3:02 PM Uwe Sauter <uwe.sauter.de@xxxxxxxxx> wrote:
>
> I do have an empty disk in that server. Should I just go the extra step and save the export to a file, then import that one?
>
>
> On 07.11.18 20:55, Jason Dillaman wrote:
> > There was a bug in "rbd import" where it disallowed the use of stdin
> > for export-format 2. This has been fixed in v12.2.9 and is in the
> > pending 13.2.3 release.
> > On Wed, Nov 7, 2018 at 2:46 PM Uwe Sauter <uwe.sauter.de@xxxxxxxxx> wrote:
> >>
> >> I tried that, but it fails:
> >>
> >> # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 2 - vdisks/vm-102-disk-2
> >> rbd: import header failed.
> >> Importing image: 0% complete...failed.
> >> rbd: import failed: (22) Invalid argument
> >> Exporting image: 0% complete...failed.
> >> rbd: export error: (32) Broken pipe
> >>
> >> But the version seems to support that option:
> >>
> >> # rbd help import
> >> usage: rbd import [--path <path>] [--dest-pool <dest-pool>] [--dest <dest>]
> >>                   [--image-format <image-format>] [--new-format]
> >>                   [--order <order>] [--object-size <object-size>]
> >>                   [--image-feature <image-feature>] [--image-shared]
> >>                   [--stripe-unit <stripe-unit>]
> >>                   [--stripe-count <stripe-count>] [--data-pool <data-pool>]
> >>                   [--journal-splay-width <journal-splay-width>]
> >>                   [--journal-object-size <journal-object-size>]
> >>                   [--journal-pool <journal-pool>]
> >>                   [--sparse-size <sparse-size>] [--no-progress]
> >>                   [--export-format <export-format>] [--pool <pool>]
> >>                   [--image <image>]
> >>                   <path-name> <dest-image-spec>
> >>
> >> On 07.11.18 20:41, Jason Dillaman wrote:
> >>> If your CLI supports "--export-format 2", you can just do "rbd export
> >>> --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 2 -
> >>> vdisks/vm-102-disk-2" (you need to specify the data format on import,
> >>> otherwise it will assume it is copying a raw image).
> >>> On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter <uwe.sauter.de@xxxxxxxxx> wrote:
> >>>>
> >>>> I've been reading a bit and experimenting, but it seems I'm not quite where I want to be.
> >>>>
> >>>> I want to migrate from pool "vms" to pool "vdisks".
> >>>>
> >>>> # ceph osd pool ls
> >>>> vms
> >>>> vdisks
> >>>>
> >>>> # rbd ls vms
> >>>> vm-101-disk-1
> >>>> vm-101-disk-2
> >>>> vm-102-disk-1
> >>>> vm-102-disk-2
> >>>>
> >>>> # rbd snap ls vms/vm-102-disk-2
> >>>> SNAPID NAME     SIZE TIMESTAMP
> >>>>     81 SL6_81 100GiB Thu Aug 23 11:57:05 2018
> >>>>     92 SL6_82 100GiB Fri Oct 12 13:27:53 2018
> >>>>
> >>>> # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import - vdisks/vm-102-disk-2
> >>>> Exporting image: 100% complete...done.
> >>>> Importing image: 100% complete...done.
> >>>>
> >>>> # rbd snap ls vdisks/vm-102-disk-2
> >>>> (no output)
> >>>>
> >>>> # rbd export-diff --whole-object vms/vm-102-disk-2 - | rbd import-diff - vdisks/vm-102-disk-2
> >>>> Exporting image: 100% complete...done.
> >>>> Importing image diff: 100% complete...done.
> >>>>
> >>>> # rbd snap ls vdisks/vm-102-disk-2
> >>>> (still no output)
> >>>>
> >>>> It looks like the current content is copied, but not the snapshots.
> >>>>
> >>>> What am I doing wrong? Any help is appreciated.
> >>>>
> >>>> Thanks,
> >>>>
> >>>>         Uwe
> >>>>
> >>>> On 07.11.18 14:39, Uwe Sauter wrote:
> >>>>> I'm still on luminous (12.2.8). I'll have a look at the commands. Thanks.
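
A minimal sketch of the file-based workaround agreed on at the top of this message, assuming the image names used above and a scratch filesystem mounted at /mnt/scratch on the empty disk (the path is only an example). "--export-format 2" has to be given on both the export and the import so that the snapshots are carried over:

    # write the image plus its snapshots to a file on the spare disk
    rbd export --export-format 2 vms/vm-102-disk-2 /mnt/scratch/vm-102-disk-2.rbd2

    # read that file back into the destination pool, again as format 2
    rbd import --export-format 2 /mnt/scratch/vm-102-disk-2.rbd2 vdisks/vm-102-disk-2

    # confirm SL6_81 and SL6_82 exist on the destination before removing the source
    rbd snap ls vdisks/vm-102-disk-2

This sidesteps the stdin bug described above, since the import reads from a regular file instead of a pipe.
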
> >>>>>
> >>>>> On 07.11.18 14:31, Jason Dillaman wrote:
> >>>>>> With the Mimic release, you can use "rbd deep-copy" to transfer the
> >>>>>> images (and associated snapshots) to a new pool. Prior to that, you
> >>>>>> could use "rbd export-diff" / "rbd import-diff" to manually transfer
> >>>>>> an image and its associated snapshots.
> >>>>>> On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter <uwe.sauter.de@xxxxxxxxx> wrote:
> >>>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> I have several VM images sitting in a Ceph pool which are snapshotted. Is there a way to move such images from one pool
> >>>>>>> to another and preserve the snapshots?
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>>
> >>>>>>>         Uwe
> >>>>>>> _______________________________________________
> >>>>>>> ceph-users mailing list
> >>>>>>> ceph-users@xxxxxxxxxxxxxx
> >>>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
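
To pull the thread's answer together, here is a minimal sketch of both approaches, using the pools, image, and snapshot names from the examples above. The luminous sequence illustrates the manual "rbd export-diff" / "rbd import-diff" method rather than an exact command log from the thread; it assumes the destination image is created up front with the same size (import-diff needs an existing image), that the VM is stopped while the head is copied, and it relies on import-diff creating the end snapshot of each diff on the destination.

    # Mimic (13.2.x) and later: one command copies the image together with its snapshots
    rbd deep-copy vms/vm-102-disk-2 vdisks/vm-102-disk-2

    # Luminous: replay the history snapshot by snapshot
    rbd create --size 100G vdisks/vm-102-disk-2

    # full diff up to the first snapshot; import-diff also creates SL6_81 on the destination
    rbd export-diff vms/vm-102-disk-2@SL6_81 - | rbd import-diff - vdisks/vm-102-disk-2

    # incremental diff between the two snapshots; creates SL6_82 on the destination
    rbd export-diff --from-snap SL6_81 vms/vm-102-disk-2@SL6_82 - | rbd import-diff - vdisks/vm-102-disk-2

    # remaining changes between SL6_82 and the image head
    rbd export-diff --from-snap SL6_82 vms/vm-102-disk-2 - | rbd import-diff - vdisks/vm-102-disk-2

    # the destination should now list SL6_81 and SL6_82
    rbd snap ls vdisks/vm-102-disk-2

In either case, "rbd snap ls" on the destination is the quick check that the snapshot history actually made the move before the source image is removed.
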