Re: Nova with Ceph generate error


 



Which request generated this trace?
Is it from the nova-compute log?
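
If you are not sure, grepping the controller and compute logs for the request ID usually shows where the 500 originated, for example (paths assume a standard RDO/CentOS layout):

    grep -r req-231347dd-f14c-4f97-8a1d-851a149b037c /var/log/nova/ /var/log/cinder/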

> On 10 Jul 2015, at 07:13, Mario Codeniera <mario.codeniera@xxxxxxxxx> wrote:
> 
> Hi,
> 
> It is my first time here. I am having an issue with my OpenStack configuration, which works perfectly for Cinder and Glance, based on the Kilo release on CentOS 7. I based my setup on this rbd-openstack manual.
> 
> 
> If I enable rbd in nova.conf, it generates an error like the following in the dashboard, while the logs don't show any errors:
> 
> Internal Server Error (HTTP 500) (Request-ID: req-231347dd-f14c-4f97-8a1d-851a149b037c)
> Code: 500
> Details:
>   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 343, in decorated_function
>     return function(self, context, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2737, in terminate_instance
>     do_terminate_instance(instance, bdms)
>   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in inner
>     return f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2735, in do_terminate_instance
>     self._set_instance_error_state(context, instance)
>   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
>     six.reraise(self.type_, self.value, self.tb)
>   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2725, in do_terminate_instance
>     self._delete_instance(context, instance, bdms, quotas)
>   File "/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
>     rv = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2694, in _delete_instance
>     quotas.rollback()
>   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
>     six.reraise(self.type_, self.value, self.tb)
>   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2664, in _delete_instance
>     self._shutdown_instance(context, instance, bdms)
>   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2604, in _shutdown_instance
>     self.volume_api.detach(context, bdm.volume_id)
>   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 214, in wrapper
>     res = method(self, ctx, volume_id, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 365, in detach
>     cinderclient(context).volumes.detach(volume_id)
>   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 334, in detach
>     return self._action('os-detach', volume)
>   File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 311, in _action
>     return self.api.client.post(url, body=body)
>   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 91, in post
>     return self._cs_request(url, 'POST', **kwargs)
>   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 85, in _cs_request
>     return self.request(url, method, **kwargs)
>   File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 80, in request
>     return super(SessionClient, self).request(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 206, in request
>     resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 95, in request
>     return self.session.request(url, method, **kwargs)
>   File "/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 318, in inner
>     return func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 397, in request
>     raise exceptions.from_response(resp, method, url)
> Created: 10 Jul 2015, 4:40 a.m.
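> 
> (For reference, the rbd-related settings enabled in nova.conf follow the rbd-openstack guide and look roughly like this on Kilo; the pool name, user and libvirt secret UUID below are placeholders, not the actual values:)
> 
>     [libvirt]
>     images_type = rbd
>     images_rbd_pool = vms
>     images_rbd_ceph_conf = /etc/ceph/ceph.conf
>     rbd_user = cinder
>     rbd_secret_uuid = <libvirt secret UUID>
>     disk_cachemodes = "network=writeback"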
> 
> 
> Again, if I disable it, I am able to work, but the error is generated on the compute node. I also observe that the dashboard doesn't display the hypervisor of the compute nodes; maybe that is related.
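> 
> (To check whether the compute node is registered at all, commands like the following should list the hypervisor and the nova-compute service state; exact output depends on the deployment:)
> 
>     nova hypervisor-list
>     nova service-list --binary nova-compute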
> 
> It was working on Juno before, but there was unexpected rework because the network infrastructure was changed, so I reran the script and found lots of conflicts, among other issues. I had previously been using qemu-img-rhev and qemu-kvm-rhev from oVirt, but the new Hammer (Ceph repository) packages seem to solve that issue.
> 
> Hope someone can enlighten me.
> 
> Thanks,
> Mario
> 
> 
> 


Cheers.
––––
Sébastien Han
Senior Cloud Architect

"Always give 100%. Unless you're giving blood."

Mail: seb@xxxxxxxxxx
Address: 11 bis, rue Roquépine - 75008 Paris


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
