Re: Cinder volume creation issues

You can specify the uuid in the secret.xml file like:

<secret ephemeral='no' private='no'>
    <uuid>bdf77f5d-bf0b-1053-5f56-cd76b32520dc</uuid>
    <usage type='ceph'>
        <name>client.volumes secret</name>
    </usage>
</secret>

Then use that same uuid on all machines in cinder.conf:

rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc
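A minimal sketch of the per-node steps, assuming the secret.xml above and a key file named client.volumes.key (the file name is just an example):

# run on every compute node, using the same secret.xml with the pinned uuid
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret bdf77f5d-bf0b-1053-5f56-cd76b32520dc \
    --base64 "$(cat client.volumes.key)"

Because the <uuid> element is pinned in secret.xml, secret-define registers the same uuid everywhere instead of generating a new one on each node.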


Also, the column you are referring to in the OpenStack Dashboard lists the machine running the Cinder volume service, not the server that actually holds the data. As Greg stated, Ceph stripes the storage across your cluster.

Fix your uuids and cinder.conf and you'll be moving in the right direction.
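
A quick sanity check afterwards, as a sketch: these two commands should print the same key on every compute node.

ceph auth get-key client.volumes
sudo virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc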

Cheers,
Mike


On 7/26/2013 1:32 PM, johnu wrote:
Greg,
             :) I am not seeing where the mistake in the
configuration was. virsh secret-define gave different secrets:

sudo virsh secret-define --file secret.xml
<uuid of secret is output here>
sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key)
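
When secret.xml carries no <uuid> element, each node's secret-define generates its own uuid, so the sequence above produces a different secret per node. One way to see the divergence, as a sketch (host names are only placeholders):

for host in master slave1 slave2; do
    echo "== $host =="
    ssh "$host" sudo virsh secret-list
done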



On Fri, Jul 26, 2013 at 10:16 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:

    On Fri, Jul 26, 2013 at 10:11 AM, johnu <johnugeorge109@xxxxxxxxx> wrote:
     > Greg,
     > Yes, the outputs match

    Nope, they don't. :) You need the secret_uuid to be the same on each
    node, because OpenStack is generating configuration snippets on one
    node (which contain these secrets) and then shipping them to another
    node where they're actually used.

    Your secrets are also different despite having the same rbd user
    specified, so that's broken too; not quite sure how you got there...
    -Greg
    Software Engineer #42 @ http://inktank.com | http://ceph.com
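
    As an illustration of where that uuid is consumed: once a volume is
    attached, the libvirt domain on the compute node references it, and you
    can check with something like this (the instance name is just an example):

    virsh dumpxml instance-00000001 | grep -A 2 "<auth"

    The <secret type='ceph' uuid='...'/> element in that output has to match
    a secret defined locally on whichever node ends up running the guest.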

     >
     > master node:
     >
     > ceph auth get-key client.volumes
     > AQC/ze1R2EOWNBAAmLUE4U7zO1KafZ/CzVVTqQ==
     >
     > virsh secret-get-value bdf77f5d-bf0b-1053-5f56-cd76b32520dc
     > AQC/ze1R2EOWNBAAmLUE4U7zO1KafZ/CzVVTqQ==
     >
     > /etc/cinder/cinder.conf
     >
     > volume_driver=cinder.volume.drivers.rbd.RBDDriver
     > rbd_pool=volumes
     > glance_api_version=2
     > rbd_user=volumes
     > rbd_secret_uuid=bdf77f5d-bf0b-1053-5f56-cd76b32520dc
     >
     >
     > slave1
     >
     > /etc/cinder/cinder.conf
     >
     > volume_driver=cinder.volume.drivers.rbd.RBDDriver
     > rbd_pool=volumes
     > glance_api_version=2
     > rbd_user=volumes
     > rbd_secret_uuid=62d0b384-50ad-2e17-15ed-66bfeda40252
     >
     >
     > virsh secret-get-value 62d0b384-50ad-2e17-15ed-66bfeda40252
     > AQC/ze1R2EOWNBAAmLUE4U7zO1KafZ/CzVVTqQ==
     >
     > slave2
     >
     > /etc/cinder/cinder.conf
     >
     > volume_driver=cinder.volume.drivers.rbd.RBDDriver
     > rbd_pool=volumes
     > glance_api_version=2
     > rbd_user=volumes
     > rbd_secret_uuid=33651ba9-5145-1fda-3e61-df6a5e6051f5
     >
     > virsh secret-get-value 33651ba9-5145-1fda-3e61-df6a5e6051f5
     > AQC/ze1R2EOWNBAAmLUE4U7zO1KafZ/CzVVTqQ==
     >
     >
      > Yes, OpenStack Horizon is showing the same host for all volumes.
      > Somehow, if a volume is attached to an instance on the same host, it
      > works; otherwise, it doesn't. Might be a coincidence. And I am surprised
      > that no one else has seen or reported this issue. Any idea?
     >
      > On Fri, Jul 26, 2013 at 9:45 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
     >>
      >> On Fri, Jul 26, 2013 at 9:35 AM, johnu <johnugeorge109@xxxxxxxxx> wrote:
     >> > Greg,
      >> >         I verified on all cluster nodes that rbd_secret_uuid is the
      >> > same as in virsh secret-list. And if I do virsh secret-get-value of
      >> > this uuid, I get back the auth key for client.volumes. What did you
      >> > mean by the same configuration? Did you mean the same secret for all
      >> > compute nodes?
     >>
      >> If you run "virsh secret-get-value" with that rbd_secret_uuid on each
      >> compute node, does it return the right secret for client.volumes?
     >>
      >> >         When we log in as admin, there is a column in the admin
      >> > panel which gives the 'host' where the volumes lie. I know that
      >> > volumes are striped across the cluster, but it gives the same host
      >> > for all volumes. That is why I got a little confused.
     >>
      >> That's not something you can get out of the RBD stack itself; is this
      >> something that OpenStack is showing you? I suspect it's just making up
      >> information to fit some API expectations, but somebody more familiar
      >> with the OpenStack guts can probably chime in.
     >> -Greg
     >> Software Engineer #42 @ http://inktank.com | http://ceph.com
     >
     >




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
