Re: Error when volume is attached in openstack

Yes, it matches for all nodes in the cluster.


On Wed, Jul 24, 2013 at 1:12 PM, Abel Lopez <alopgeek@xxxxxxxxx> wrote:
You are correct, I didn't add that to nova.conf, only cinder.conf.
If you run
virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc
do you see the same key that you have for your client.volumes?
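
As a quick check (a minimal sketch, assuming the cephx client really is named client.volumes as above), the two outputs should be identical on every compute node:

  # key that libvirt will hand to qemu for the rbd disk
  sudo virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc
  # key stored in the Ceph cluster for client.volumes
  sudo ceph auth get-key client.volumes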

On Jul 24, 2013, at 12:11 PM, johnu <johnugeorge109@xxxxxxxxx> wrote:

Abel,
       What did you change in nova.conf? I have added rbd_username and rbd_secret_uuid in cinder.conf, and I verified that rbd_secret_uuid is the same UUID that virsh secret-list shows.


On Wed, Jul 24, 2013 at 11:49 AM, Abel Lopez <alopgeek@xxxxxxxxx> wrote:
One thing I had to do, and it's not really in the documentation:
I created the secret once on one compute node, then reused that UUID when creating it on the rest of the compute nodes.
I was then able to use the same value in cinder.conf AND nova.conf; see the sketch below.
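
For reference, a minimal sketch of the two config entries involved. The UUID is the one from this thread; the option names (rbd_user / rbd_secret_uuid and the RBD driver path) are what the Grizzly/Havana-era Ceph guide uses, so treat them as an assumption and check against your release:

  # cinder.conf (cinder-volume host)
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=volumes
  rbd_user=volumes
  rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc

  # nova.conf (every compute node)
  rbd_user=volumes
  rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc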
 
On Jul 24, 2013, at 11:39 AM, johnu <johnugeorge109@xxxxxxxxx> wrote:

sudo virsh secret-list
UUID                                 Usage
-----------------------------------------------------------
bdf77f5d-bf0b-1053-5f56-cd76b32520dc Unused

All nodes have secret set.


On Wed, Jul 24, 2013 at 11:30 AM, Abel Lopez <alopgeek@xxxxxxxxx> wrote:
You need to do this on each compute node, and you can verify with 
virsh secret-list
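
In case it helps, a minimal sketch of that step, following the Ceph/OpenStack guide and assuming the client.volumes key and the UUID used elsewhere in this thread (the usage name is arbitrary):

  cat > secret.xml <<EOF
  <secret ephemeral='no' private='no'>
    <uuid>bdf77f5d-bf0b-1053-5f56-cd76b32520dc</uuid>
    <usage type='ceph'>
      <name>client.volumes secret</name>
    </usage>
  </secret>
  EOF
  # define the secret with the same UUID on each compute node
  sudo virsh secret-define --file secret.xml
  # then load the client.volumes key into it
  sudo virsh secret-set-value --secret bdf77f5d-bf0b-1053-5f56-cd76b32520dc \
      --base64 "$(sudo ceph auth get-key client.volumes)"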

On Jul 24, 2013, at 11:20 AM, johnu <johnugeorge109@xxxxxxxxx> wrote:


I was trying OpenStack on Ceph. I can create volumes, but I am not able to attach a volume to any running instance. If I attach a volume to an instance and reboot it, the instance goes into an error state.

Compute error logs are given below.

15:32.666 ERROR nova.compute.manager [req-464776fd-2832-4f76-91fa-3e4eff173064 None None] [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64] error during stop() in sync_power_state.
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64] Traceback (most recent call last):
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]   File "/opt/stack/nova/nova/compute/manager.py", line 4421, in _sync_instance_power_state
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]     self.conductor_api.compute_stop(context, db_instance)
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]   File "/opt/stack/nova/nova/conductor/api.py", line 333, in compute_stop
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]     return self._manager.compute_stop(context, instance, do_cast)
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]   File "/opt/stack/nova/nova/conductor/rpcapi.py", line 483, in compute_stop
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]     return self.call(context, msg, version='1.43')
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]   File "/opt/stack/nova/nova/openstack/common/rpc/proxy.py", line 126, in call
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]     result = rpc.call(context, real_topic, msg, timeout)
2013-07-23 17:15:32.666 TRACE nova.compute.manager [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64]

Jul 23 17:17:18 slave2 2013-07-23 17:17:18.380 ERROR nova.virt.libvirt.driver [req-560b46ed-e96e-4645-a23e-3eba6f51437c admin admin] An error occurred while trying to launch a defined domain with xml: <domain type='qemu'>
  <name>instance-0000000b</name>
  <uuid>4b58dea1-f281-4818-82da-8b9f5f923f64</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>OpenStack Foundation</entry>
      <entry name='product'>OpenStack Nova</entry>
      <entry name='version'>2013.2</entry>
      <entry name='serial'>38047832-f758-4e6d-aedf-2d6cf02d6b1e</entry>
      <entry name='uuid'>4b58dea1-f281-4818-82da-8b9f5f923f64</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
    <kernel>/opt/stack/data/nova/instances/4b58dea1-f281-4818-82da-8b9f5f923f64/kernel</kernel>
    <initrd>/opt/stack/data/nova/instances/4b58dea1-f281-4818-82da-8b9f5f923f64/ramdisk</initrd>
    <cmdline>root=/dev/vda console=tty0 console=ttyS0</cmdline>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/opt/stack/data/nova/instances/4b58dea1-f281-4818-82da-8b9f5f923f64/disk'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='volumes'>
        <secret type='ceph' uuid='62d0b384-5


Jul 23 17:17:18 slave2 2013-07-23 17:17:18.410 ERROR nova.compute.manager [req-560b46ed-e96e-4645-a23e-3eba6f51437c admin admin] [instance: 4b58dea1-f281-4818-82da-8b9f5f923f64] Cannot reboot instance: internal error rbd username 'volumes' specified but secret not found



Jul 23 17:17:18 slave2 2013-07-23 17:17:18.681 ERROR nova.openstack.common.rpc.amqp [req-560b46ed-e96e-4645-a23e-3eba6f51437c admin admin] Exception during message handling
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 426, in _process_data
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp     **args)
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/exception.py", line 99, in wrapped
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp     temp_level, payload)
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/exception.py", line 76, in wrapped
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 228, in decorated_function
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp     pass
2013-07-23 17:17:18.681 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/contextlib.py",



I had set up the virsh secret as described in the Ceph OpenStack guide. How can I verify my settings?

Thanks
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






