Hi all,
With a running boot-from-volume instance backed by Ceph, I ran the command
to create an image from the instance. Everything seems to work fine, but if
I look in the database I notice that the location is empty:
mysql> select * from images where id="b7674970-5d60-41da-bbb9-2ef10955fbbe"\G
*************************** 1. row ***************************
id: b7674970-5d60-41da-bbb9-2ef10955fbbe
name: snapshot_athena326
size: 0
status: active
is_public: 1
location: NULL
created_at: 2013-08-29 14:41:16
updated_at: 2013-08-29 14:41:16
deleted_at: NULL
deleted: 0
disk_format: raw
container_format: bare
checksum: 8e79e146ce5d2c71807362730e7b5a3b
owner: 36d462972b1d49c5850ca864b6f39d05
min_disk: 0
min_ram: 0
protected: 0
1 row in set (0.00 sec)
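(Side note: glance keeps per-image metadata in a separate image_properties
table; assuming the stock glance schema, a query along these lines should
show the block_device_mapping property that image-show prints further down:)

mysql> select name, value from image_properties where image_id="b7674970-5d60-41da-bbb9-2ef10955fbbe";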
Bug?
Additional info:
# glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
7729788f-b80a-4d90-b3c7-6f61f5ebd535 Ubuntu 12.04 LTS 32bits        raw                  bare                     2147483648
b0692408-6bcf-40b1-94c6-672154d5d7eb Ubuntu 12.04 LTS 64bits        raw                  bare                     2147483648
I created an instance from image 7729788f-b80a-4d90-b3c7-6f61f5ebd535.
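(For reference, the boot command was along these lines; this is illustrative
only, with flavor and network flags omitted, and the volume id is the one
that appears in the cinder logs below:)

# nova boot --flavor <flavor> --block-device-mapping vda=1b1e9684-05fa-4d8b-90a3-5bd2031c28bd:::1 athena326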
# nova list
+--------------------------------------+-----------+--------+----------------------------------------+
| ID                                   | Name      | Status | Networks                                |
+--------------------------------------+-----------+--------+----------------------------------------+
| bffd1b30-5690-4d2f-9347-1f0b7202ee6d | athena326 | ACTIVE | Private_15=10.128.3.195, 88.87.208.155 |
+--------------------------------------+-----------+--------+----------------------------------------+
# nova image-create bffd1b30-5690-4d2f-9347-1f0b7202ee6d snapshot_athena326
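(As a cross-check, the snapshot that nova creates on the cinder side should
be listed by the cinder client; here grepping for the snapshot id that later
shows up in the image's block_device_mapping:)

# cinder snapshot-list | grep 7a41d848-6d35-47a6-b3ce-7be1d3643e68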
///LOGS in cinder_volume
2013-08-29 16:41:16 INFO cinder.volume.manager [req-8fc22aae-a516-4f62-a836-99f63f86f144 55b70876b2d24eb393da5119cb2b8ee4 36d462972b1d49c5850ca864b6f39d05] snapshot snapshot-7a41d848-6d35-47a6-b3ce-7be1d3643e68: creating
2013-08-29 16:41:16 DEBUG cinder.volume.manager [req-8fc22aae-a516-4f62-a836-99f63f86f144 55b70876b2d24eb393da5119cb2b8ee4 36d462972b1d49c5850ca864b6f39d05] snapshot snapshot-7a41d848-6d35-47a6-b3ce-7be1d3643e68: creating create_snapshot /usr/lib/python2.7/dist-packages/cinder/volume/manager.py:234
2013-08-29 16:41:16 DEBUG cinder.utils [req-8fc22aae-a516-4f62-a836-99f63f86f144 55b70876b2d24eb393da5119cb2b8ee4 36d462972b1d49c5850ca864b6f39d05] Running cmd (subprocess): rbd snap create --pool volumes --snap snapshot-7a41d848-6d35-47a6-b3ce-7be1d3643e68 volume-1b1e9684-05fa-4d8b-90a3-5bd2031c28bd execute /usr/lib/python2.7/dist-packages/cinder/utils.py:167
2013-08-29 16:41:17 DEBUG cinder.utils [req-8fc22aae-a516-4f62-a836-99f63f86f144 55b70876b2d24eb393da5119cb2b8ee4 36d462972b1d49c5850ca864b6f39d05] Running cmd (subprocess): rbd --help execute /usr/lib/python2.7/dist-packages/cinder/utils.py:167
2013-08-29 16:41:17 DEBUG cinder.utils [req-8fc22aae-a516-4f62-a836-99f63f86f144 55b70876b2d24eb393da5119cb2b8ee4 36d462972b1d49c5850ca864b6f39d05] Running cmd (subprocess): rbd snap protect --pool volumes --snap snapshot-7a41d848-6d35-47a6-b3ce-7be1d3643e68 volume-1b1e9684-05fa-4d8b-90a3-5bd2031c28bd execute /usr/lib/python2.7/dist-packages/cinder/utils.py:167
2013-08-29 16:41:17 DEBUG cinder.volume.manager [req-8fc22aae-a516-4f62-a836-99f63f86f144 55b70876b2d24eb393da5119cb2b8ee4 36d462972b1d49c5850ca864b6f39d05] snapshot snapshot-7a41d848-6d35-47a6-b3ce-7be1d3643e68: created successfully create_snapshot /usr/lib/python2.7/dist-packages/cinder/volume/manager.py:249
///LOGS in cinder_volume
root@nova-volume-lnx001:/home/ackstorm# glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
b7674970-5d60-41da-bbb9-2ef10955fbbe snapshot_athena326             raw                  bare                              0
7729788f-b80a-4d90-b3c7-6f61f5ebd535 Ubuntu 12.04 LTS 32bits        raw                  bare                     2147483648
b0692408-6bcf-40b1-94c6-672154d5d7eb Ubuntu 12.04 LTS 64bits        raw                  bare                     2147483648
# glance image-show b7674970-5d60-41da-bbb9-2ef10955fbbe
+---------------------------------+----------------------------------------------------------------------------------------------------------------+
| Property                        | Value                                                                                                          |
+---------------------------------+----------------------------------------------------------------------------------------------------------------+
| Property 'block_device_mapping' | [{"device_name": "vda", "delete_on_termination": true, "snapshot_id": "7a41d848-6d35-47a6-b3ce-7be1d3643e68"}] |
| Property 'root_device_name'     | /dev/vda                                                                                                       |
| checksum                        | 8e79e146ce5d2c71807362730e7b5a3b                                                                               |
| container_format                | bare                                                                                                           |
| created_at                      | 2013-08-29T14:41:16                                                                                            |
| deleted                         | False                                                                                                          |
| disk_format                     | raw                                                                                                            |
| id                              | b7674970-5d60-41da-bbb9-2ef10955fbbe                                                                           |
| is_public                       | True                                                                                                           |
| min_disk                        | 0                                                                                                              |
| min_ram                         | 0                                                                                                              |
| name                            | snapshot_athena326                                                                                             |
| owner                           | 36d462972b1d49c5850ca864b6f39d05                                                                               |
| protected                       | False                                                                                                          |
| size                            | 0                                                                                                              |
| status                          | active                                                                                                         |
| updated_at                      | 2013-08-29T14:41:16                                                                                            |
+---------------------------------+----------------------------------------------------------------------------------------------------------------+
# glance show b7674970-5d60-41da-bbb9-2ef10955fbbe
Id: b7674970-5d60-41da-bbb9-2ef10955fbbe
Public: Yes
Protected: No
Name: snapshot_athena326
Status: active
Size: 0
Disk format: raw
Container format: bare
Minimum Ram Required (MB): 0
Minimum Disk Required (GB): 0
Owner: 36d462972b1d49c5850ca864b6f39d05
Property 'root_device_name': /dev/vda
Property 'block_device_mapping': [{"device_name": "vda", "delete_on_termination": true, "snapshot_id": "7a41d848-6d35-47a6-b3ce-7be1d3643e68"}]
Created at: 2013-08-29T14:41:16
Updated at: 2013-08-29T14:41:16
# rbd ls volumes | grep volume-1b1e9684-05fa-4d8b-90a3-5bd2031c28bd
volume-1b1e9684-05fa-4d8b-90a3-5bd2031c28bd
# rbd snap ls volumes/volume-1b1e9684-05fa-4d8b-90a3-5bd2031c28bd
SNAPID NAME                                             SIZE
    87 snapshot-0e431fb7-b24e-4ca4-ab48-0b4da63767e7 2048 MB
    90 snapshot-6d99f645-96ce-4847-9f2b-5e7aa5031bd1 2048 MB
    89 snapshot-7a41d848-6d35-47a6-b3ce-7be1d3643e68 2048 MB
    88 snapshot-8b136189-f06c-4598-bebf-bba9817a1f90 2048 MB
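(The cinder log above shows an "rbd snap protect" call, so the snapshot
backing the new image should be protected; if useful, that can be confirmed
with rbd info on the snapshot, which reports its protection status:)

# rbd info volumes/volume-1b1e9684-05fa-4d8b-90a3-5bd2031c28bd@snapshot-7a41d848-6d35-47a6-b3ce-7be1d3643e68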
Regards