Thanks Josh, that explains it. So I guess that right now with Grizzly you can only use one RBD backend pool (assuming a different cephx key for each pool) on a single Cinder node, unless you are willing to modify /etc/init/cinder-volume.conf and restart the cinder service every time.
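
In the meantime, the workaround we will probably try (an untested sketch; the client name "volumes-multi" and the first pool name below are made up) is to create a single cephx user with caps on both pools, so that the one ID in CEPH_ARGS works for every backend:

    # one cephx user that can read/write both RBD pools
    ceph auth get-or-create client.volumes-multi \
        mon 'allow r' \
        osd 'allow rwx pool=stack-mgmt-openstack-volumes-1, allow rwx pool=stack-mgmt-openstack-volumes-2'

    # then in /etc/init/cinder-volume.conf:
    #   env CEPH_ARGS="--id volumes-multi"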
--weiguo

> Date: Wed, 26 Jun 2013 15:08:56 -0700
> From: josh.durgin@xxxxxxxxxxx
> To: wsun2@xxxxxxxxxxx
> CC: ceph-users@xxxxxxxxxxxxxx; sebastien.han@xxxxxxxxxxxx
> Subject: Re: Openstack Multi-rbd storage backend
>
> On 06/21/2013 09:48 AM, w sun wrote:
> > Josh & Sebastien,
> >
> > Does either of you have any comments on this cephx issue with multi-rbd
> > backend pools?
> >
> > Thx. --weiguo
> >
> > ------------------------------------------------------------------------
> > From: wsun2@xxxxxxxxxxx
> > To: ceph-users@xxxxxxxxxxxxxx
> > Date: Thu, 20 Jun 2013 17:58:34 +0000
> > Subject: Openstack Multi-rbd storage backend
> >
> > Has anyone seen the same issue as below?
> >
> > We are trying to test the multi-backend feature with two RBD pools on
> > the Grizzly release. At this point, it seems that rbd.py does not take
> > separate cephx users for the two RBD pools for authentication, as it
> > defaults to the single ID defined in /etc/init/cinder-volume.conf, which
> > is documented here with 'env CEPH_ARGS="--id volume"':
> >
> > http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder-nova-volume
> >
> > It seems to us that rbd.py is ignoring the separate "rbd_user="
> > configuration for each storage backend section,
>
> In Grizzly, this option is only used to tell nova which user to connect
> as. cinder-volume requires CEPH_ARGS="--id user" to set the ceph user
> you want it to use. This has changed in Havana, where the rbd_user
> option is used by Cinder as well, but for Grizzly you'll need to set
> the CEPH_ARGS environment variable differently if you want
> different users for each backend.
>
> Josh
>
> > [svl-stack-mgmt-openstack-volumes-2]
> > volume_driver=cinder.volume.drivers.rbd.RBDDriver
> > rbd_pool=stack-mgmt-openstack-volumes-2
> > rbd_user=stack-mgmt-openstack-volumes-2
> > rbd_secret_uuid=e1124cad-55e8-d4ce-6c68-5f40491b15ef
> > volume_backend_name=RBD_CINDER_VOLUMES_3
> >
> > Here is the error from cinder-volume.log:
> >
> > -----------------------------------------
> > File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py",
> > line 144, in delete_volume
> >     volume['name'])
> > File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 190, in
> > execute
> >     cmd=' '.join(cmd))
> > ProcessExecutionError: Unexpected error while running command.
> > Command: rbd snap ls --pool svl-stack-mgmt-openstack-volumes-2
> > volume-9f1735ae-b31f-4cd5-a279-f879692839c3
> > Exit code: 1
> > Stdout: ''
> > Stderr: 'rbd: error opening image
> > volume-9f1735ae-b31f-4cd5-a279-f879692839c3: (1) Operation not
> > permitted\n2013-06-20 10:41:46.591363 7f68117a9780 -1 librbd::ImageCtx:
> > error finding header: (1) Operation not permitted\n'
> > -------------------------------------------
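
For anyone finding this thread on Havana or later, where rbd_user is honored by Cinder too, a multi-backend cinder.conf along these lines should give each backend its own cephx user (a sketch only; the enabled_backends names, the "-1" pool/user, and the placeholder UUID are made up, while the second section reuses the values quoted above):

    [DEFAULT]
    enabled_backends=rbd-volumes-1,rbd-volumes-2

    [rbd-volumes-1]
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=stack-mgmt-openstack-volumes-1
    rbd_user=stack-mgmt-openstack-volumes-1
    rbd_secret_uuid=<uuid-of-libvirt-secret-for-user-1>
    volume_backend_name=RBD_CINDER_VOLUMES_1

    [rbd-volumes-2]
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=stack-mgmt-openstack-volumes-2
    rbd_user=stack-mgmt-openstack-volumes-2
    rbd_secret_uuid=e1124cad-55e8-d4ce-6c68-5f40491b15ef
    volume_backend_name=RBD_CINDER_VOLUMES_3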