Re: help on keystone v3 ceph.conf in Jewel

Hi,

from the log file, it looks like librbd.so references an entry point that the installed librados does not provide, so the library fails to load. See my comment inline.

Have you upgraded the Ceph client packages on the Cinder node and on the Nova compute node, or did you do the upgrade only on the Ceph nodes?
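If only the cluster side was upgraded, the client libraries on the OpenStack nodes are still at Hammer while parts of the stack expect Jewel. A quick sketch for checking and fixing this on the Cinder and Nova compute nodes (assuming Ubuntu Trusty with the Mitaka cloud archive enabled there as well; the package names are the stock Ubuntu ones):

# Check which Ceph client packages are installed, and at which version
dpkg -l | grep -E 'ceph-common|librados|librbd|python-rados|python-rbd'

# If they still report Hammer (0.94.x), bring them up to Jewel (10.2.x)
apt-get update
apt-get install --only-upgrade ceph-common librados2 librbd1 python-rados python-rbd

# Restart the consumers so they load the new libraries
service cinder-volume restart    # on the cinder node
service nova-compute restart     # on each compute node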

JC

> On Sep 9, 2016, at 09:37, Robert Duncan <Robert.Duncan@xxxxxxxx> wrote:
> 
> Hi,
> 
> I have deployed the Mirantis distribution of OpenStack Mitaka, which comes with Ceph Hammer. Since I want to use keystone v3 with radosgw, I added the Ubuntu cloud archive for Mitaka on Trusty,
> and then followed the upgrade instructions (I had to remove the MOS sources from sources.list).
> 
> Anyway, the upgrade looks to have gone okay and I am now on Jewel, but rbd and rgw have stopped working in the cloud - is this down to my ceph.conf?
> 
> There are no clues in the keystone logs.
> 
> 
> 
> [global]
> fsid = 5d587e15-5904-4fd2-84db-b4038c18e327
> mon_initial_members = node-10
> mon_host = 172.25.80.4
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> log_to_syslog_level = info
> log_to_syslog = True
> osd_pool_default_size = 2
> osd_pool_default_min_size = 1
> osd_pool_default_pg_num = 64
> public_network = 172.25.80.0/24
> log_to_syslog_facility = LOG_LOCAL0
> osd_journal_size = 2048
> auth_supported = cephx
> osd_pool_default_pgp_num = 64
> osd_mkfs_type = xfs
> cluster_network = 172.25.80.0/24
> osd_recovery_max_active = 1
> osd_max_backfills = 1
> setuser match path = /var/lib/ceph/$type/$cluster-$id
> 
> [client]
> rbd_cache_writethrough_until_flush = True
> rbd_cache = True
> 
> [client.radosgw.gateway]
> rgw_keystone_accepted_roles = _member_, Member, admin, swiftoperator
> keyring = /etc/ceph/keyring.radosgw.gateway
> rgw_frontends = fastcgi socket_port=9000 socket_host=127.0.0.1
> rgw_socket_path = /tmp/radosgw.sock
> rgw_keystone_revocation_interval = 1000000
> rgw_keystone_url = http://172.25.90.5:35357
> rgw_keystone_admin_token = iaUKRVcU6dSa8xuJvJiZYkEZ
> host = node-10
> rgw_dns_name = *.domain.local
> rgw_print_continue = True
> rgw_keystone_token_cache_size = 10
> rgw_data = /var/lib/ceph/radosgw
> user = www-data
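
[On the keystone v3 part of the question: Jewel's radosgw can talk to the keystone v3 API natively. A minimal sketch of the relevant options for the client.radosgw.gateway section, using a dedicated service user instead of the shared admin token; the user, password, domain, and project values below are placeholders, not taken from this deployment:

rgw_keystone_api_version = 3
rgw_keystone_url = http://172.25.90.5:35357
rgw_keystone_admin_user = rgw
rgw_keystone_admin_password = RGW_SERVICE_PASSWORD
rgw_keystone_admin_domain = Default
rgw_keystone_admin_project = services
rgw_keystone_accepted_roles = _member_, Member, admin, swiftoperator]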
> 
> Cinder throws the following error:
> 
> 9 16:01:26 node-10 cinder-volume: 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher [req-c88086a3-3d6b-42a3-9670-c4c92909423c 9f4bf81c57214f88bced5e233061e71e 1cb2488ad03541df8f122b6f4907c820 - - -] Exception during message handling: /usr/lib/librbd.so.1: undefined symbol: _ZN8librados5Rados15aio_watch_flushEPNS_13AioCompletionE
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     incoming.message))
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, in _dispatch
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 631, in create_volume
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     _run_flow()
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 618, in _run_flow
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     flow_engine.run()
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 224, in run
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     for _state in self.run_iter():
> <155>Sep  9 16:01:26 node-10 cinder-scheduler: 2016-09-09 16:01:26.167 4008 ERROR cinder.scheduler.filter_scheduler [req-c88086a3-3d6b-42a3-9670-c4c92909423c 9f4bf81c57214f88bced5e233061e71e 1cb2488ad03541df8f122b6f4907c820 - - -] Error scheduling None from last vol-service: rbd:volumes@RBD-backend#RBD-backend : [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task\n    result = task.execute(**arguments)\n', u'  File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 819, in execute\n    **volume_spec)\n', u'  File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 797, in _create_raw_volume\n    return self.driver.create_volume(volume_ref)\n', u'  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 551, in create_volume\n    self.RBDProxy().create(client.ioctx,\n', u'  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 305, in RBDProxy\n    return tpool.Proxy(self.rbd.RBD())\n', u'  File "/usr/lib/python2.7/dist-packages/rbd.py", line 147, in __init__\n', u'  File "/usr/lib/python2.7/dist-packages/rbd.py", line 133, in load_librbd\n', u'  File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__\n    self._handle = _dlopen(self._name, mode)\n', u'OSError: /usr/lib/librbd.so.1: undefined symbol: _ZN8librados5Rados15aio_watch_flushEPNS_13AioCompletionE\n']

[This log entry shows that loading librbd.so.1 fails because it references librados::Rados::aio_watch_flush(), which the installed librados does not export - i.e. a Jewel librbd is being loaded against a Hammer librados.]
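
You can confirm this directly on the node; a quick sketch, using the paths from the traceback:

# Demangle the symbol from the error message - it belongs to librados, not librbd
c++filt _ZN8librados5Rados15aio_watch_flushEPNS_13AioCompletionE
# -> librados::Rados::aio_watch_flush(librados::AioCompletion*)

# Does the installed librados export that symbol? (empty output = Hammer librados)
nm -D /usr/lib/librados.so.2 | c++filt | grep aio_watch_flush

# Which librados does the dynamic linker resolve for librbd?
ldd /usr/lib/librbd.so.1 | grep librados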

> <155>Sep  9 16:01:26 node-10 cinder-scheduler: 2016-09-09 16:01:26.175 4008 ERROR cinder.scheduler.flows.create_volume [req-c88086a3-3d6b-42a3-9670-c4c92909423c 9f4bf81c57214f88bced5e233061e71e 1cb2488ad03541df8f122b6f4907c820 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid host was found. Exceeded max scheduling attempts 3 for volume None
> root@node-10:~#
> 
> 
> thanks for looking!
> 
> Rob Duncan.

JC Lopez
S. Technical Instructor, Global Storage Consulting Practice
Red Hat, Inc.
jelopez@xxxxxxxxxx
+1 408-680-6959

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



