Gluster 3.2.6 and InfiniBand

To make a long story short, I made rdma client volfiles and mounted
with them directly:

#/etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol	/pirdist	glusterfs	transport=rdma	0 0
#/etc/glusterd/vols/pirstripe/pirstripe.rdma-fuse.vol	/pirstripe	glusterfs	transport=rdma	0 0
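
If you just want to test one of these without touching fstab, the same
mount can be done by handing the volfile to the glusterfs client
directly (at least with the client options as I remember them):

# same volfile and mount point as the first fstab entry above
glusterfs -f /etc/glusterd/vols/pirdist/pirdist.rdma-fuse.vol /pirdist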

The transport=rdma option does nothing here, since the client reads the
transport from the .vol file itself. However, you'll notice the entries
are now commented out, because RDMA has been very unstable for us:
servers lose their connections to each other, which somehow causes the
GbE clients to lose their connections as well. IP over IB (IPoIB), on
the other hand, is working great; it gives up some performance compared
to RDMA, but it's still much better than GbE.
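
For completeness, the IPoIB entries we run now look roughly like this
("ibserver" is just a placeholder for a server's IPoIB address, and the
mount options are only an example):

# volfile is fetched from glusterd over IPoIB; data also flows over IPoIB
ibserver:/pirdist	/pirdist	glusterfs	defaults,_netdev	0 0
ibserver:/pirstripe	/pirstripe	glusterfs	defaults,_netdev	0 0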

On Thu, Jun 7, 2012 at 4:25 AM, bxmatus at gmail.com <bxmatus at gmail.com> wrote:
> Hello,
>
> At first it was tcp, then tcp,rdma.
>
> You are right that ".rdma" does not work without the tcp transport
> defined. But now I have another problem.
> I'm trying tcp/rdma, and I've even tried tcp/rdma over a normal 1 Gbit
> network card (not over the InfiniBand IP), and I still get the same
> speeds: uploads of about 30 MB/s and downloads of about 200 MB/s. So
> I'm not sure RDMA is even working.
>
> Native InfiniBand gives me about 3500 MB/s in benchmark tests
> (ib_rdma_bw).
>
> thanks
>
> Matus
>
> 2012/6/7 Amar Tumballi <amarts at redhat.com>:
>> On 06/07/2012 02:04 PM, bxmatus at gmail.com wrote:
>>>
>>> Hello,
>>>
>>> I have a problem with gluster 3.2.6 and InfiniBand. With gluster 3.3
>>> it works OK, but with 3.2.6 I have the following problem:
>>>
>>> When I try to mount the rdma volume with the command "mount -t glusterfs
>>> 192.168.100.1:/atlas1.rdma mount", I get:
>>>
>>> [2012-06-07 04:30:18.894337] I [glusterfsd.c:1493:main]
>>> 0-/usr/local/sbin/glusterfs: Started running /usr/local/sbin/glusterfs
>>> version 3.2.6
>>> [2012-06-07 04:30:18.907499] E
>>> [glusterfsd-mgmt.c:628:mgmt_getspec_cbk] 0-glusterfs: failed to get
>>> the 'volume file' from server
>>> [2012-06-07 04:30:18.907592] E
>>> [glusterfsd-mgmt.c:695:mgmt_getspec_cbk] 0-mgmt: failed to fetch
>>> volume file (key:/atlas1.rdma)
>>> [2012-06-07 04:30:18.907995] W [glusterfsd.c:727:cleanup_and_exit]
>>> (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xc9)
>>> [0x7f784e2c8bc9] (-->/usr/local/
>>> lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x7f784e2c8975]
>>> (-->/usr/local/sbin/glusterfs(mgmt_getspec_cbk+0x28b) [0x40861b])))
>>> 0-: received signum (0)
>>> , shutting down
>>> [2012-06-07 04:30:18.908049] I [fuse-bridge.c:3727:fini] 0-fuse:
>>> Unmounting 'mount'.
>>>
>>> The same command without ".rdma" works fine.
>>>
>>
>> Is the volume's transport type only 'rdma', or is it 'tcp,rdma'? If it's
>> only 'rdma', then appending ".rdma" to the volume name is not required.
>> Appending ".rdma" is only needed when the volume has both transport types
>> (i.e. 'tcp,rdma'), since from the client you can then choose which
>> transport you want to mount.
>>
>> The default volume name points to the 'tcp' transport type, and appending
>> ".rdma" points to the rdma transport type.
>>
>> Hope that is clear now.
>>
>> Regards,
>> Amar
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
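
To put Amar's explanation above into concrete commands, using the volume
from this thread (the mount point is just an example; 'gluster volume
info' shows which transports your volume actually has):

# check the transports the volume was created with
gluster volume info atlas1 | grep Transport-type

# the default name mounts over the tcp transport
mount -t glusterfs 192.168.100.1:/atlas1 /mnt/atlas1

# appending .rdma picks the rdma transport (only needed on a 'tcp,rdma' volume)
mount -t glusterfs 192.168.100.1:/atlas1.rdma /mnt/atlas1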

