On Wed, Aug 14, 2013 at 04:24:55PM -0700, Josh Durgin wrote:
> On 08/14/2013 02:22 PM, Michael Morgan wrote:
> >Hello Everyone,
> >
> > I have a Ceph test cluster doing storage for an OpenStack Grizzly platform
> >(also testing). Upgrading to 0.67 went fine on the Ceph side with the cluster
> >showing healthy, but suddenly I can't upload images into Glance anymore. The
> >upload fails and glance-api throws an error:
> >
> >2013-08-14 15:19:55.898 ERROR glance.api.v1.images [4dcd9de0-af65-4902-a36d-afc5497605e7 3867c65db6cc48398a0f57ce53144e69 5dbca756421c4a3eb0a1cc2f1ee3c67c] Failed to upload image
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images Traceback (most recent call last):
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images   File "/usr/lib/python2.6/site-packages/glance/api/v1/images.py", line 444, in _upload
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images     image_meta['size'])
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images   File "/usr/lib/python2.6/site-packages/glance/store/rbd.py", line 241, in add
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images     with rados.Rados(conffile=self.conf_file, rados_id=self.user) as conn:
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images   File "/usr/lib/python2.6/site-packages/rados.py", line 195, in __init__
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images     raise Error("Rados(): can't supply both rados_id and name")
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images Error: Rados(): can't supply both rados_id and name
> >2013-08-14 15:19:55.898 24740 TRACE glance.api.v1.images
>
> This would be a backwards-compatibility regression in the librados
> python bindings - a fix is in the dumpling branch, and a point
> release is in the works. You could add name=None to that rados.Rados()
> call in glance to work around it in the meantime.
>
> Josh
>
> > I'm not sure if there's a patch I need to track down for Glance or if I missed
> >a change in the necessary Glance/Ceph setup. Is anyone else seeing this
> >behavior? Thanks!
> >
> >-Mike

That's what I ended up doing (the change is sketched below) and it's working
fine now. Thanks for the confirmation and all the hard work on OpenStack
integration. I look forward to the point release.

-Mike

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
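For anyone hitting the same error on 0.67.0, Josh's workaround boils down to passing name=None explicitly to the rados.Rados() call in glance/store/rbd.py (line 241 in the traceback above). Below is a minimal standalone sketch of that call; the ceph.conf path and the cephx user in it are placeholder examples, not values taken from this thread:

    # Minimal sketch of the workaround: pass name=None explicitly so the
    # 0.67.0 rados.py bindings don't think both rados_id and name were
    # supplied. With fixed bindings the extra argument is redundant but
    # harmless.
    import rados

    conf_file = '/etc/ceph/ceph.conf'  # example path, not from the thread
    user = 'glance'                    # example cephx id (rados_id), not from the thread

    with rados.Rados(conffile=conf_file, rados_id=user, name=None) as conn:
        # The context manager connects on entry; printing the cluster fsid
        # just shows the handle works without tripping the rados_id/name check.
        print(conn.get_fsid())

In Glance itself the equivalent change is a one-liner: add name=None to the existing rados.Rados(conffile=self.conf_file, rados_id=self.user) call shown in the traceback.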