Hi,
it seems the RHEL/CentOS package glusterfs-rdma (from the Gluster repo)
does not work together with the Mellanox OFED.
I suspect the problem is the dependency on libibverbs, which in turn
depends on the rdma package of RHEL/CentOS.
Installing glusterfs-rdma may therefore break your Mellanox OFED
installation. At least on the four systems (with three different
kernels) I tried, the openibd.service became unusable because of the
rdma_cm module installed by the rdma package.
Is there anything I can do if I want to use GlusterFS but need the
Mellanox OFED, or do I have to discard the idea of using GlusterFS for
my setup?
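In case it helps anyone digging into this, checks along these lines
should show where the verbs/rdma_cm pieces come from. This is only a
rough sketch: the library paths assume a stock CentOS 7 x86_64 install,
and exact package names may differ with the Mellanox OFED.

  # which packages would glusterfs-rdma pull in?
  $ yum deplist glusterfs-rdma | grep -iE 'ibverbs|rdmacm|rdma'

  # who currently owns the userspace libraries?
  $ rpm -qf $(readlink -f /usr/lib64/libibverbs.so.1)
  $ rpm -qf $(readlink -f /usr/lib64/librdmacm.so.1)

  # is the rdma_cm kernel module the distro one or the OFED one?
  # (rpm -qf may report "not owned" depending on how the OFED was installed)
  $ rpm -qf $(modinfo -n rdma_cm)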
Best regards,
Jochen
On 09.11.2015 at 23:31, Jochen Becker wrote:
Hi folks,
I have some problems starting a replica volume on a two-node
InfiniBand setup.
Both systems run on identical hardware, and InfiniBand (IPoIB,
ibverbs) seems to work well.
The OS is CentOS 7.1, freshly installed and updated, the Mellanox OFED
is in use, openibd is running, and both Gluster peers are in the
connected state with each other. Creating the volume was no problem at
all, but starting it always fails. With the force option the volume
appears to start, but it cannot be mounted.
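For reference, the volume was created roughly like this (the compute01
brick is shown for illustration; only compute02 appears in the log
below):

  $ gluster volume create instances replica 2 transport rdma \
        compute01:/mnt/bricks/instances compute02:/mnt/bricks/instances
  $ gluster volume start instances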
Here are the entries from mnt-bricks-instances.log written while
running "gluster volume start instances":
[2015-11-09 22:01:00.153360] I [MSGID: 100030]
[glusterfsd.c:2318:main] 0-/usr/sbin/glusterfsd: Started running
/usr/sbin/glusterfsd version 3.7.5 (args: /usr/sbin/glusterfsd -s
compute02 --volfile-id instances.compute02.mnt-bricks-instances -p
/var/lib/glusterd/vols/instances/run/compute02-mnt-bricks-instances.pid -S
/var/run/gluster/8f5e59a0b8d5949b51b4c276192b0725.socket --brick-name
/mnt/bricks/instances -l
/var/log/glusterfs/bricks/mnt-bricks-instances.log --xlator-option
*-posix.glusterd-uuid=52109ce5-6173-4d22-bffc-a03c71d24791
--brick-port 49152 --xlator-option instances-server.listen-port=49152
--volfile-server-transport=rdma)
[2015-11-09 22:01:00.169326] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2015-11-09 22:01:00.177472] I [graph.c:269:gf_add_cmdline_options]
0-instances-server: adding option 'listen-port' for volume
'instances-server' with value '49152'
[2015-11-09 22:01:00.177519] I [graph.c:269:gf_add_cmdline_options]
0-instances-posix: adding option 'glusterd-uuid' for volume
'instances-posix' with value '52109ce5-6173-4d22-bffc-a03c71d24791'
[2015-11-09 22:01:00.177826] I [MSGID: 115034]
[server.c:403:_check_for_auth_option] 0-/mnt/bricks/instances: skip
format check for non-addr auth option
auth.login./mnt/bricks/instances.allow
[2015-11-09 22:01:00.177913] I [MSGID: 115034]
[server.c:403:_check_for_auth_option] 0-/mnt/bricks/instances: skip
format check for non-addr auth option
auth.login.9d64b3ec-9d24-41ac-ba84-cf58c67c9b21.password
[2015-11-09 22:01:00.177916] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 2
[2015-11-09 22:01:00.179299] I
[rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service:
Configured rpc.outstanding-rpc-limit with value 64
[2015-11-09 22:01:00.181636] W [MSGID: 101002]
[options.c:957:xl_opt_validate] 0-instances-server: option
'listen-port' is deprecated, preferred is
'transport.rdma.listen-port', continuing with correction
[2015-11-09 22:01:00.183742] W [MSGID: 103071]
[rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
channel creation failed [Keine Berechtigung]
[2015-11-09 22:01:00.183782] W [MSGID: 103055] [rdma.c:4899:init]
0-rdma.instances-server: Failed to initialize IB Device
[2015-11-09 22:01:00.183796] W
[rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'rdma'
initialization failed
[2015-11-09 22:01:00.183866] W [rpcsvc.c:1597:rpcsvc_transport_create]
0-rpc-service: cannot create listener, initing the transport failed
[2015-11-09 22:01:00.183884] W [MSGID: 115045] [server.c:1019:init]
0-instances-server: creation of listener failed
[2015-11-09 22:01:00.183898] E [MSGID: 101019]
[xlator.c:428:xlator_init] 0-instances-server: Initialization of
volume 'instances-server' failed, review your volfile again
[2015-11-09 22:01:00.183912] E [graph.c:322:glusterfs_graph_init]
0-instances-server: initializing translator failed
[2015-11-09 22:01:00.183921] E [graph.c:661:glusterfs_graph_activate]
0-graph: init failed
[2015-11-09 22:01:00.184429] W [glusterfsd.c:1236:cleanup_and_exit]
(-->/usr/sbin/glusterfsd(mgmt_getspec_cbk+0x331) [0x7f7b6aee02f1]
-->/usr/sbin/glusterfsd(glusterfs_process_volfp+0x126)
[0x7f7b6aedb0f6] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x69)
[0x7f7b6aeda6d9] ) 0-: received signum (0), shutting down
I think the problem starts at 2015-11-09 22:01:00.183742, where the
creation of the rdma_cm event channel fails. "Keine Berechtigung" is
the localized message for "permission denied". The rdma_cm module is
loaded and I can't find any other problem with the InfiniBand or RDMA
setup. I have no clue what is going wrong here, so any hints on how to
proceed are appreciated.
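For completeness, here is a rough sketch of checks that exercise
rdma_cm outside of Gluster (rping ships with librdmacm-utils; the
device path assumes the usual /dev/infiniband layout, and the IPoIB
address is a placeholder):

  # librdmacm creates the event channel by opening /dev/infiniband/rdma_cm,
  # so the device node must exist and be accessible to the process:
  $ ls -l /dev/infiniband/rdma_cm

  # confirm the module really is loaded:
  $ lsmod | grep rdma_cm

  # exercise rdma_cm end to end without Gluster:
  $ rping -s -v -C 5                      # on one node
  $ rping -c -a <ipoib-address> -v -C 5   # on the other node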
Cheers,
Jochen
--
Linux System Engineer
mail j.becker@xxxxxxxxxx
uvensys GmbH
Registered office and seat of the company:
uvensys GmbH
Elsa-Brandström-Straße 3
35510 Butzbach
HRB: AG Friedberg, 7780
VAT ID: DE282879294
Managing directors:
Dr. Thomas Licht, t.licht@xxxxxxxxxx
Volker Lieder, v.lieder@xxxxxxxxxx
E-Mail: info@xxxxxxxxxx
Internet: http://www.uvensys.de
Direct line: 06033 - 9756552
Switchboard: 06033 - 9756940
Fax: 06033 - 9756554
==========================================================
Any views or opinions presented in this email are solely
those of the author and do not necessarily represent those
of the company. If verification is required please request
a hard-copy version.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users