Gluster 3.2.6 and infiniband

Hello,

After downgrading the kernel to 2.6.28 (glusterd does not work on 3.2.12 -
see my previous email), I'm not able to use RDMA at all. Mounting without
RDMA (the volume transport is tcp,rdma) works, but the speed tops out at
about 150 MB/s. When I try to mount with .rdma it fails and the log
contains this:

[2012-06-08 03:50:32.442263] I [glusterfsd.c:1493:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
3.2.6
[2012-06-08 03:50:32.451931] W [write-behind.c:3023:init]
0-atlas1-write-behind: disabling write-behind for first 0 bytes
[2012-06-08 03:50:32.455502] E [rdma.c:3969:rdma_init]
0-rpc-transport/rdma: Failed to get infiniband device context
[2012-06-08 03:50:32.455528] E [rdma.c:4813:init] 0-atlas1-client-0:
Failed to initialize IB Device
[2012-06-08 03:50:32.455541] E
[rpc-transport.c:742:rpc_transport_load] 0-rpc-transport: 'rdma'
initialization failed
[2012-06-08 03:50:32.455554] W
[rpc-clnt.c:926:rpc_clnt_connection_init] 0-atlas1-client-0: loading
of new rpc-transport failed
[2012-06-08 03:50:32.456355] E [client.c:2095:client_init_rpc]
0-atlas1-client-0: failed to initialize RPC
[2012-06-08 03:50:32.456378] E [xlator.c:1447:xlator_init]
0-atlas1-client-0: Initialization of volume 'atlas1-client-0' failed,
review your volfile again
[2012-06-08 03:50:32.456391] E [graph.c:348:glusterfs_graph_init]
0-atlas1-client-0: initializing translator failed
[2012-06-08 03:50:32.456403] E [graph.c:526:glusterfs_graph_activate]
0-graph: init failed
[2012-06-08 03:50:32.456680] W [glusterfsd.c:727:cleanup_and_exit]
(-->/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)
[0x7f98ecea7175] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0xc7) [0x4089d7]
(-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x1a0) [0x406410])))
0-: received signum (0), shutting down
[2012-06-08 03:50:32.456720] I [fuse-bridge.c:3727:fini] 0-fuse:
Unmounting 'mount'.

The InfiniBand configuration is identical under the new and the old kernel.
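
For what it's worth, the "Failed to get infiniband device context" error usually
means libibverbs could not open an HCA from userspace on that kernel. A quick way
to check this outside of gluster would be the standard verbs/IB diagnostics (tool
names from libibverbs-utils and infiniband-diags, nothing gluster-specific):

  # list the verbs devices libibverbs can open (should show e.g. mlx4_0 or mthca0)
  ibv_devices

  # dump port state, link layer and GUIDs for each device
  ibv_devinfo

  # confirm the port is Active / LinkUp, i.e. a subnet manager is reachable
  ibstat

If these fail on 2.6.28 but work on the other kernel, that would point at the
kernel's IB stack rather than at gluster itself.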

thanks

Matus


2012/6/7 Sabuj Pattanayek <sabujp at gmail.com>:
> To make a long story short, I made rdma client volfiles and
> mounted with them directly:
>
> #/etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol       /pirdist        glusterfs
> transport=rdma  0 0
> #/etc/glusterd/vols/pirstripe/pirstripe.rdma-fuse.vol   /pirstripe      glusterfs
> transport=rdma  0 0
>
> The transport=rdma option does nothing here, since the parameters are read
> from the .vol files. However, you'll see that the entries are now commented
> out, because RDMA has been very unstable for us. Servers lose their
> connections to each other, which somehow causes the gbe clients to lose
> their connections. IP over IB, however, is working great, although at
> the expense of some performance vs RDMA, but it's still much better
> than gbe.
>
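
A side note on the approach above: pointing the mount at a generated client
volfile skips the volfile fetch from glusterd. Assuming the volfile paths from
the fstab lines quoted above, the equivalent one-off commands should look
roughly like this (a sketch, not taken from the thread):

  # mount via the client volfile directly
  mount -t glusterfs /etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol /pirdist

  # or start the fuse client by hand with an explicit volfile
  glusterfs --volfile=/etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol /pirdist
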
> On Thu, Jun 7, 2012 at 4:25 AM, bxmatus at gmail.com <bxmatus at gmail.com> wrote:
>> Hello,
>>
>> At first it was tcp, then tcp,rdma.
>>
>> You are right that without a tcp transport defined, ".rdma" does not work.
>> But now I have another problem.
>> I'm trying tcp / rdma, and I have even tried tcp/rdma over a normal 1 Gbit
>> network card (i.e. not over the InfiniBand IP), and I still get the same
>> speed: uploads of about 30 MB/s and downloads of about 200 MB/s, so I'm
>> not sure RDMA is working at all.
>>
>> Native InfiniBand gives me about 3500 MB/s in benchmark tests
>> (ib_rdma_bw).
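
For reference, the raw link bandwidth figure above is the kind of number the
perftest tools report. A rough sketch of how such a test is usually run (the
address here is the server's IPoIB address from this thread; adjust to your setup):

  # on the server node, start the listener
  ib_rdma_bw

  # on the client node, point at the server
  ib_rdma_bw 192.168.100.1
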
>>
>> thanks
>>
>> Matus
>>
>> 2012/6/7 Amar Tumballi <amarts at redhat.com>:
>>> On 06/07/2012 02:04 PM, bxmatus at gmail.com wrote:
>>>>
>>>> Hello,
>>>>
>>>> I have a problem with gluster 3.2.6 and InfiniBand. With gluster 3.3
>>>> it works fine, but with 3.2.6 I have the following problem:
>>>>
>>>> When I try to mount the rdma volume with the command mount -t glusterfs
>>>> 192.168.100.1:/atlas1.rdma mount, I get:
>>>>
>>>> [2012-06-07 04:30:18.894337] I [glusterfsd.c:1493:main]
>>>> 0-/usr/local/sbin/glusterfs: Started running /usr/local/sbin/glusterfs
>>>> version 3.2.6
>>>> [2012-06-07 04:30:18.907499] E
>>>> [glusterfsd-mgmt.c:628:mgmt_getspec_cbk] 0-glusterfs: failed to get
>>>> the 'volume file' from server
>>>> [2012-06-07 04:30:18.907592] E
>>>> [glusterfsd-mgmt.c:695:mgmt_getspec_cbk] 0-mgmt: failed to fetch
>>>> volume file (key:/atlas1.rdma)
>>>> [2012-06-07 04:30:18.907995] W [glusterfsd.c:727:cleanup_and_exit]
>>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xc9) [0x7f784e2c8bc9]
>>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x7f784e2c8975]
>>>> (-->/usr/local/sbin/glusterfs(mgmt_getspec_cbk+0x28b) [0x40861b])))
>>>> 0-: received signum (0), shutting down
>>>> [2012-06-07 04:30:18.908049] I [fuse-bridge.c:3727:fini] 0-fuse:
>>>> Unmounting 'mount'.
>>>>
>>>> The same command without ".rdma" works fine.
>>>>
>>>
>>> Is the volume's transport type only 'rdma', or 'tcp,rdma'? If it is only
>>> 'rdma', then appending ".rdma" to the volume name is not required. Appending
>>> ".rdma" is only needed when a volume has both transport types
>>> (i.e. 'tcp,rdma'), since from the client you can then choose which transport
>>> to mount with.
>>>
>>> The plain volume name points to the 'tcp' transport, and appending
>>> ".rdma" points to the rdma transport.
>>>
>>> Hope that is clear now.
>>>
>>> Regards,
>>> Amar
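
To illustrate Amar's point with concrete commands (a sketch only; the volume,
brick and mount point names below are made up, while the mount syntax follows
the thread):

  # create a volume that exports both transports
  gluster volume create testvol transport tcp,rdma server1:/export/brick1
  gluster volume start testvol

  # mount over tcp (what the plain volume name gives you)
  mount -t glusterfs server1:/testvol /mnt/testvol

  # mount the same volume over rdma by appending .rdma to the volume name
  mount -t glusterfs server1:/testvol.rdma /mnt/testvol-rdma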

