poor performance of infiniband over tcp

Hello,
I tested the performance of InfiniBand RDMA versus TCP using GlusterFS 3.2.5. There are 10 servers and 2 clients, all connected over InfiniBand and with identical hardware. I created two distribute (hash) volumes, each with 10 bricks. All bricks are 16 TB ext4 filesystems on RAID5, spread across the different servers. One volume's transport type is rdma and the other's is tcp.
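For reference, the volumes were created roughly along these lines (volume names, server names, and brick paths below are placeholders, not the exact commands I ran):

```shell
# Hypothetical sketch of the setup described above; names and paths
# are placeholders. Distribute (hash) is the default layout when no
# replica/stripe count is given.
gluster volume create vol-rdma transport rdma \
    server{1..10}:/bricks/raid5/vol-rdma
gluster volume create vol-tcp transport tcp \
    server{1..10}:/bricks/raid5/vol-tcp
gluster volume start vol-rdma
gluster volume start vol-tcp
```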


One client mounts the rdma volume and the other mounts the tcp volume. I used iozone to test the performance:
root@client-1:/mnt/rdma# iozone -i 0 -i 1 -s 10g -t 10 -R
root@client-2:/mnt/tcp# iozone -i 0 -i 1 -s 10g -t 10 -R


Performance of the rdma volume:
Read: 1.1 GB/s
Write: 1.0 GB/s
Performance of the tcp volume:
Read: 23.5 MB/s
Write: 52.8 MB/s
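To put a number on the gap: taking 1 GB = 1024 MB, the rdma volume comes out roughly 48x faster on reads and 19x faster on writes:

```shell
# Speedup of the rdma volume over the tcp volume, from the figures above.
awk 'BEGIN {
    printf "read:  %.0fx\n", (1.1 * 1024) / 23.5
    printf "write: %.0fx\n", (1.0 * 1024) / 52.8
}'
```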
I have checked the network and the brick filesystems, and all of them look normal. I wonder why the performance of InfiniBand over TCP (IPoIB) is so poor.
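For completeness, this is roughly how I would check the raw IPoIB link independently of GlusterFS (host and interface names are placeholders; `ib0` is the usual IPoIB interface name but may differ on your systems):

```shell
# On one server (placeholder name), start an iperf server:
iperf -s

# On the client, measure raw TCP throughput over the IPoIB address:
iperf -c server1 -t 30

# Also worth checking: IPoIB mode and MTU. Connected mode with a large
# MTU (e.g. 65520) usually performs far better over TCP than datagram
# mode with MTU 2044.
cat /sys/class/net/ib0/mode
ip link show ib0
```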
Thanks for any help.


Best Regards,
Luna


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20131107/6e09e078/attachment.html>

