Re: help on keystone v3 ceph.conf in Jewel

Thanks Jean-Charles,

It was the Ceph client packages on the cinder node, as you suspected. I now have a working rbd driver with Cinder. I am left with only one other problem since the upgrade, which has me stumped:
the RADOS gateway. Apache can't seem to proxy to the service.

<VirtualHost 192.168.10.2:6780>
  ServerName node-10.domain.local
  DocumentRoot /var/www/radosgw

  RewriteEngine On
  RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

  SetEnv proxy-nokeepalive 1
  ProxyPass / fcgi://127.0.0.1:9000/

  ## Logging
  ErrorLog "/var/log/apache2/radosgw_error.log"
  CustomLog "/var/log/apache2/radosgw_access.log" forwarded

  AllowEncodedSlashes On
  ServerSignature Off
</VirtualHost>


The radosgw service is running and the rados client works:

root@node-10:/etc/apache2/sites-enabled# service radosgw status
/usr/bin/radosgw is running.
root@node-10:/etc/apache2/sites-enabled# rados -p .rgw put myobject test.txt
root@node-10:/etc/apache2/sites-enabled#

but the virtual host can't make a connection to the FastCGI backend:

[client 193.1.202.3:42416] AH01079: failed to make connection to backend: 127.0.0.1
[Mon Sep 12 13:11:46.591957 2016] [proxy:error] [pid 8695:tid 139780608206592] AH00940: FCGI: disabled connection for (127.0.0.1)
[Mon Sep 12 13:11:48.626932 2016] [proxy:error] [pid 8700:tid 139780608206592] AH00940: FCGI: disabled connection for (127.0.0.1)
[Mon Sep 12 13:11:50.572243 2016] [proxy:error] [pid 8704:tid 139780616599296] (111)Connection refused: AH00957: FCGI: attempt to connect to 127.0.0.1:9000 (127.0.0.1) failed
[Mon Sep 12 13:11:50.572300 2016] [proxy:error] [pid 8704:tid 139780616599296] AH00959: ap_proxy_connect_backend disabling worker for (127.0.0.1) for 60s
[Mon Sep 12 13:11:50.572312 2016] [proxy_fcgi:error] [pid 8704:tid 139780616599296] [client 192.168.10.2:42484] AH01079: failed to make connection to backend: 127.0.0.1

Apache has loaded the module:
apache2ctl -M | grep fast
AH00316: WARNING: MaxRequestWorkers of 2406 is not an integer multiple of  ThreadsPerChild of 25, decreasing to nearest multiple 2400,  for a maximum of 96 servers.
 fastcgi_module (shared)
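
Since the ProxyPass above uses an fcgi:// URL, I also want to confirm the mod_proxy modules are enabled and not just mod_fastcgi (only a sketch of the check, assuming the standard Ubuntu apache2 module names):

apache2ctl -M | grep -i proxy        # should list proxy_module and proxy_fcgi_module
a2enmod proxy proxy_fcgi             # enable them if they are missing
service apache2 restart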

and radosgw seems to be bound to the port:

netstat -tulpn | grep 9000
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      11045/radosgw


and I can telnet to the port:

root@node-10:/etc/apache2/sites-enabled# telnet 127.0.0.1 9000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
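
To go one step beyond a plain TCP connect, I was also going to try speaking FastCGI to the socket directly (just a sketch; it assumes the cgi-fcgi utility from the FastCGI devkit is installed on the node, which it may not be):

# send a minimal FastCGI GET request to the backend radosgw is listening on
SCRIPT_NAME=/ SCRIPT_FILENAME=/ REQUEST_METHOD=GET QUERY_STRING= \
  cgi-fcgi -bind -connect 127.0.0.1:9000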

As per the ceph.conf:

rgw_keystone_accepted_roles = _member_, Member, admin, swiftoperator
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_frontends = fastcgi socket_port=9000 socket_host=127.0.0.1
rgw_socket_path = /tmp/radosgw.sock
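
(As an aside, and only as a sketch I have not tried on this cluster: my understanding is that in Jewel the gateway can also serve HTTP itself through its embedded civetweb frontend, which would take Apache and FastCGI out of the picture entirely; the section would then look roughly like this, with 7480 being the usual default port.)

[client.radosgw.gateway]
rgw_frontends = "civetweb port=7480"
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_keystone_url = http://172.25.90.5:35357
rgw_keystone_accepted_roles = _member_, Member, admin, swiftoperator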

I have noticed two things:
1. there is no problem with the rados client interacting with the cluster and creating objects, and
2. the S3 API seems to be up when I visit the rgw service in a browser:

<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Owner>
<ID>anonymous</ID>
<DisplayName/>
</Owner>
<Buckets/>
</ListAllMyBucketsResult>
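
Once the proxy issue is sorted I intend to exercise the Keystone v3 path with the Swift client, roughly like this (a sketch only: the username, password, project and domain values are placeholders, and I am assuming the usual public Keystone endpoint on port 5000 rather than the admin URL in ceph.conf):

swift --auth-version 3 \
  --os-auth-url http://172.25.90.5:5000/v3 \
  --os-username demo --os-password secret \
  --os-project-name demo \
  --os-project-domain-name Default --os-user-domain-name Default \
  stat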


Thanks for looking!


Rob.


-----Original Message-----
From: LOPEZ Jean-Charles [mailto:jelopez@xxxxxxxxxx] 
Sent: Friday, September 9, 2016 6:10 PM
To: Robert Duncan <Robert.Duncan@xxxxxxxx>
Cc: LOPEZ Jean-Charles <jelopez@xxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxx>
Subject: Re:  help on keystone v3 ceph.conf in Jewel

Hi,

From the log file it looks like librbd.so doesn't contain a specific entry point that needs to be called. See my comment inline.

Have you upgraded the Ceph client packages on the cinder node and on the nova compute node, or did you only upgrade the Ceph nodes?
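
A quick way to check on the cinder node (a sketch only; package names and the library path may differ slightly on your Trusty build, though the traceback points at /usr/lib/librbd.so.1):

# confirm the client libraries really are at the Jewel version
dpkg -l | egrep 'ceph|librados|librbd|python-rbd'

# the missing symbol belongs to librados; see whether the installed copy exports it
nm -D /usr/lib/librados.so.2 | grep aio_watch_flush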

JC

> On Sep 9, 2016, at 09:37, Robert Duncan <Robert.Duncan@xxxxxxxx> wrote:
> 
> Hi,
> 
> I have deployed the Mirantis distribution of OpenStack Mitaka, which comes with Ceph Hammer. Since I want to use Keystone v3 with radosgw, I added the Ubuntu Cloud Archive for Mitaka on Trusty,
> and then followed the upgrade instructions (I had to remove the MOS sources from sources.list).
> 
> Anyway, the upgrade looks to have gone okay and I am now on Jewel, but rbd and rgw have stopped working in the cloud - is this down to my ceph.conf?
> 
> There are no clues in the Keystone logs.
> 
> 
> 
> [global]
> fsid = 5d587e15-5904-4fd2-84db-b4038c18e327
> mon_initial_members = node-10
> mon_host = 172.25.80.4
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> log_to_syslog_level = info
> log_to_syslog = True
> osd_pool_default_size = 2
> osd_pool_default_min_size = 1
> osd_pool_default_pg_num = 64
> public_network = 172.25.80.0/24
> log_to_syslog_facility = LOG_LOCAL0
> osd_journal_size = 2048
> auth_supported = cephx
> osd_pool_default_pgp_num = 64
> osd_mkfs_type = xfs
> cluster_network = 172.25.80.0/24
> osd_recovery_max_active = 1
> osd_max_backfills = 1
> setuser match path = /var/lib/ceph/$type/$cluster-$id
> 
> [client]
> rbd_cache_writethrough_until_flush = True
> rbd_cache = True
> 
> [client.radosgw.gateway]
> rgw_keystone_accepted_roles = _member_, Member, admin, swiftoperator
> keyring = /etc/ceph/keyring.radosgw.gateway
> rgw_frontends = fastcgi socket_port=9000 socket_host=127.0.0.1
> rgw_socket_path = /tmp/radosgw.sock
> rgw_keystone_revocation_interval = 1000000
> rgw_keystone_url = http://172.25.90.5:35357
> rgw_keystone_admin_token = iaUKRVcU6dSa8xuJvJiZYkEZ
> host = node-10
> rgw_dns_name = *.domain.local
> rgw_print_continue = True
> rgw_keystone_token_cache_size = 10
> rgw_data = /var/lib/ceph/radosgw
> user = www-data
> 
> Cinder throws the following error:
> 
> 9 16:01:26 node-10 cinder-volume: 2016-09-09 16:01:26.026 3759 ERROR 
> oslo_messaging.rpc.dispatcher 
> [req-c88086a3-3d6b-42a3-9670-c4c92909423c 
> 9f4bf81c57214f88bced5e233061e71e 1cb2488ad03541df8f122b6f4907c820 - - 
> -] Exception during message handling: /usr/lib/librbd.so.1: undefined 
> symbol: _ZN8librados5Rados15aio_watch_flushEPNS_13AioCompletionE
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     incoming.message))
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, in _dispatch
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 631, in create_volume
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     _run_flow()
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 618, in _run_flow
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     flow_engine.run()
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 224, in run
> 2016-09-09 16:01:26.026 3759 ERROR oslo_messaging.rpc.dispatcher     for _state in self.run_iter():
> <155>Sep  9 16:01:26 node-10 cinder-scheduler: 2016-09-09 16:01:26.167 4008 ERROR cinder.scheduler.filter_scheduler [req-c88086a3-3d6b-42a3-9670-c4c92909423c 9f4bf81c57214f88bced5e233061e71e 1cb2488ad03541df8f122b6f4907c820 - - -] Error scheduling None from last vol-service: rbd:volumes@RBD-backend#RBD-backend : [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task\n    result = task.execute(**arguments)\n', u'  File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 819, in execute\n    **volume_spec)\n', u'  File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 797, in _create_raw_volume\n    return self.driver.create_volume(volume_ref)\n', u'  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 551, in create_volume\n    self.RBDProxy().create(client.ioctx,\n', u'  File "/usr/lib
> /python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 305, in RBDProxy\n    return tpool.Proxy(self.rbd.RBD())\n', u'  File "/usr/lib/python2.7/dist-packages/rbd.py", line 147, in __init__\n', u'  File "/usr/lib/python2.7/dist-packages/rbd.py", line 133, in load_librbd\n', u'  File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__\n    self._handle = _dlopen(self._name, mode)\n', u'OSError: /usr/lib/librbd.so.1: undefined symbol: _ZN8librados5Rados15aio_watch_flushEPNS_13AioCompletionE\n’]

[This log entry indicates a missing entry point in librbd.so]

> <155>Sep  9 16:01:26 node-10 cinder-scheduler: 2016-09-09 16:01:26.175 
> 4008 ERROR cinder.scheduler.flows.create_volume 
> [req-c88086a3-3d6b-42a3-9670-c4c92909423c 
> 9f4bf81c57214f88bced5e233061e71e 1cb2488ad03541df8f122b6f4907c820 - - 
> -] Failed to run task 
> cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:c
> reate: No valid host was found. Exceeded max scheduling attempts 3 for 
> volume None root@node-10:~#
> 
> 
> thanks for looking!
> 
> Rob Duncan.
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

JC Lopez
S. Technical Instructor, Global Storage Consulting Practice, Red Hat, Inc.
jelopez@xxxxxxxxxx
+1 408-680-6959

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



