I map the images directly with
rbd device map -t nbd nova/<UUID>_disk
because in my case the OpenStack instance has direct access to the
public Ceph network, so it can map the image itself. But that setup is
probably not typical?
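Roughly, it looks like this on my side (just a sketch; pool, image and
mount point names are placeholders, and it assumes rbd-nbd is installed
in the guest and the guest has a ceph.conf plus a keyring authorized for
that pool):

  rbd clone <pool>/<image>@<snapshotname> <pool>/<clonename>
  rbd device map -t nbd <pool>/<clonename>     # prints e.g. /dev/nbd0
  mount /dev/nbd0p1 /mnt/restore               # mount the partition you need
  # ... copy the files back, then clean up:
  umount /mnt/restore
  rbd device unmap -t nbd <pool>/<clonename>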
Quoting "Stolte, Felix" <f.stolte@xxxxxxxxxxxxx>:
Hello fellow Cephers,
I know this is not the OpenStack mailing list, but since many of us
are using Ceph as a backend for OpenStack, maybe someone can help me
out.
I have a pool of RBDs which are exported via iSCSI to some bare
metal Windows servers, and I take snapshots of those RBDs regularly. Now I
want to clone a snapshot and attach the clone to an OpenStack
instance (for restore purposes).
I got it working, but the process is very cumbersome, so maybe
someone can point me to a faster way. For now it looks like the
following (rough commands are sketched after the list):
1. Create the clone via 'rbd clone <pool>/<image>@<snapshotname>
<pool>/<clonename>'
2. Import the clone into the OpenStack admin project with 'cinder
manage --name=<my_volume_name> <cinderhost>@<pool>#<pool>'
3. Create a volume transfer request in the admin project
4. Accept the transfer request in the desired project
5. Attach the volume to the instance
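As a rough sketch, the five steps map to these commands (everything in
angle brackets is a placeholder; the trailing <clonename> identifier on
'cinder manage' is my assumption about how the RBD backend expects the
managed image to be named, so the exact syntax may differ):

  rbd clone <pool>/<image>@<snapshotname> <pool>/<clonename>
  cinder manage --name <my_volume_name> <cinderhost>@<pool>#<pool> <clonename>
  cinder transfer-create --name restore-transfer <volume-id>
  # note the transfer id and auth key from the output, then, in the target project:
  cinder transfer-accept <transfer-id> <auth-key>
  openstack server add volume <instance-id> <volume-id>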
There has to be an easier and faster way to accomplish this. Does
anyone have a better solution?
Best regards
Felix
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx