Greg,
I verified on all cluster nodes that rbd_secret_uuid matches the UUID shown by virsh secret-list, and if I do virsh secret-get-value on that UUID, I get back the auth key for client.volumes. What did you mean by "same configuration"? Did you mean the same secret on all compute nodes?

On Fri, Jul 26, 2013 at 9:23 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
On Fri, Jul 26, 2013 at 9:17 AM, johnu <johnugeorge109@xxxxxxxxx> wrote:
> Hi all,
> I need to know whether someone else has also faced the same issue.
>
> I tried the OpenStack + Ceph integration. I have seen that I can create
> volumes from Horizon, and they are created in RADOS.
>
> When I check the created volumes in the admin panel, all volumes are shown
> as created on the same host. (I tried creating 10 volumes, but all were
> created on the same host, 'slave1'.) I haven't changed the CRUSH map and am
> using the default one that came with ceph-deploy.

RBD volumes don't live on a given host in the cluster; they are
striped across all of them. What do you mean the volume is "in"
slave1?

> Second issue:
> I am not able to attach volumes to instances if the hosts differ. E.g., if
> volumes are created on host 'slave1', instance1 is created on host
> 'master', and instance2 is created on host 'slave1', I am able to attach
> volumes to instance2 but not to instance1.

This sounds like maybe you don't have quite the same configuration on
both hosts. Due to the way OpenStack and virsh handle their config
fragments and secrets, you need to have the same virsh secret IDs both
configured (in the OpenStack config files) and set (in virsh's
internal database) on every compute host and the Cinder/Nova manager.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
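[Editor's note: the per-node setup Greg describes can be sketched with the following commands. This is an illustrative outline, not a tested recipe; the UUID is a placeholder (generate one with uuidgen and reuse the same value everywhere), and the config file locations assume a stock Cinder/Nova layout.]

```shell
# Sketch: define the SAME libvirt secret, with the SAME UUID and value,
# on every compute node, so that rbd_secret_uuid in the OpenStack config
# resolves identically no matter which host the instance lands on.

SECRET_UUID="457eb676-33da-42ec-9a8c-9293d545c337"   # placeholder; pick one and reuse it

# 1. Fetch the Ceph key for the client.volumes user (run where ceph.conf
#    and an admin keyring are available).
KEY=$(ceph auth get-key client.volumes)

# 2. On EACH compute node, define a secret carrying that fixed UUID...
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${SECRET_UUID}</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml

# 3. ...and set its value to the same Ceph key.
virsh secret-set-value --secret "${SECRET_UUID}" --base64 "${KEY}"

# 4. Point the OpenStack config at that UUID on every host, e.g.:
#      rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
#    in cinder.conf (volume node) and in nova.conf on the compute nodes.
```

If the UUIDs or key values differ between nodes, attaching a volume works only on the host whose secret happens to match, which is consistent with the behavior described above.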