This solved my problem.
I think the documentation should mention this, and also that you need to configure multiple compute nodes with the same UUID.
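For reference, a minimal sketch of the settings involved (the pool, user name, and UUID below are example values, not taken from this thread):

```ini
# cinder.conf on the cinder-volume host
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
# If rbd_user is unset, nova's libvirt config generator receives
# auth_username=None, which causes the lxml TypeError quoted below.
rbd_user = cinder
# Must match the libvirt secret UUID registered on EVERY compute node
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```

The same libvirt secret, with the same UUID, then has to be defined on each compute node (e.g. via `virsh secret-define` and `virsh secret-set-value`) so that any node can attach the volume.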
-----Original Message-----
From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
To: "Makkelie, R - SPLXL" <Ramon.Makkelie@xxxxxxx>
Subject: Re: [ceph-users] openstack and ceph cannot attach volumes
Date: Mon, 03 Jun 2013 17:59:28 -0700
On 06/03/2013 03:37 AM, Makkelie, R - SPLXL wrote:
> I have the following installed
> - openstack grizzly
> - ceph cuttlefish
>
> and followed
> http://ceph.com/docs/master/rbd/rbd-openstack/
>
> I can create volumes and I see them in ceph with the appropriate IDs,
> but when I want to attach them to an instance I get an error:
>
> 2013-06-03 12:22:35.563 AUDIT nova.compute.manager [req-cfda1cf6-c1aa-40f0-9a5c-b9da61979c47 529a2c1a5f924206a9d82c7a86846a0d 7cc8d4da86534c4792a6839ac2927a9a] [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee4$
> 2013-06-03 12:22:35.725 ERROR nova.compute.manager [req-cfda1cf6-c1aa-40f0-9a5c-b9da61979c47 529a2c1a5f924206a9d82c7a86846a0d 7cc8d4da86534c4792a6839ac2927a9a] [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee4$
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49] Traceback (most recent call last):
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2859, in _attach_volume
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]     mountpoint)
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 981, in attach_volume
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]     disk_dev)
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]     self.gen.next()
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 968, in attach_volume
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py", line 68, in to_xml
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]     root = self.format_dom()
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py", line 507, in format_dom
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]     auth.set("username", self.auth_username)
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "lxml.etree.pyx", line 695, in lxml.etree._Element.set (src/lxml/lxml.etree.c:35440)
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "apihelpers.pxi", line 563, in lxml.etree._setAttributeValue (src/lxml/lxml.etree.c:15471)
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]   File "apihelpers.pxi", line 1364, in lxml.etree._utf8 (src/lxml/lxml.etree.c:22039)
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49] TypeError: Argument must be bytes or unicode, got 'NoneType'
> 2013-06-03 12:22:35.725 23702 TRACE nova.compute.manager [instance: b4bd7311-d4aa-402a-898d-c20ba4d1ee49]
>
> does anyone know what might be the problem?

It looks like you need to set rbd_user in cinder.conf.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com