Slow reading speed over RDMA

Hi Daniel,

I suspect that the issue is not a fault of RDMA, but of the stripe volume.
We have seen stripe volumes drop performance in many cases (which is why
we rarely recommend them). Please create a plain distribute volume and
measure its performance to make sure RDMA is all right. If you get lower
performance on the distribute volume too, then we can say the fault lies
with RDMA.
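
A quick sketch of such a test (the volume name, brick paths, and mount
point are illustrative; adjust them to your layout):

    # Create a plain distribute volume over RDMA (no stripe, no replica)
    gluster volume create disttest transport rdma \
        10.1.0.4:/disk1 10.1.0.4:/disk2 10.1.0.4:/disk3 10.1.0.4:/disk4
    gluster volume start disttest

    # Mount it on a client and rerun the same fio workload
    mount -t glusterfs 10.1.0.4:/disttest /mnt/disttest

If the distribute volume gets close to the lustre numbers, the problem
is in the stripe translator rather than in the RDMA transport.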

-Amar

2011/5/12 Daniel Pereira <d.pereira at skillupjapan.co.jp>

>  Hi everyone,
>
>  I hope you can help me with some performance troubles I've been having.
>  I'm doing some tests with gluster 3.2.0, and I can't understand some
> of the behavior I'm getting with it.
>
>  The test uses a volume with 12 striped bricks (each brick is an HD),
> with no replication, via RDMA. I'm doing random reads of 4GByte files
> with the FIO tool.
>  Gluster read speed is 150-160MByte/s with a single client, and
> 110-120MByte/s per client if I connect with two clients.
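>
>  A sketch of that kind of fio job (the block size, io engine, and
> mount point here are assumptions, not the exact parameters used):
>
>     fio --name=randread --directory=/mnt/gluster --rw=randread \
>         --bs=4k --size=4g --numjobs=1 --direct=1 --ioengine=libaio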
>
>  However, on the same machine, I have a similar lustre setup with a
> constant 400MByte/s throughput for the same tests, with the same number
> of disks going through the same RAID controller. Both tests were run at
> different times.
>
>  For comparison, in the sequential read tests both gluster and lustre
> give me results of 600 to 700MByte/s.
>
>  The gluster configuration files have no extra modifications; the disks
> are formatted with ext3, and the volume was created via:
> gluster volume create test stripe 16 transport rdma 10.1.0.4:/disk1
> 10.1.0.4:/disk2 ... 10.1.0.4:/disk16
>
>  Using dd to write/read from each of the disks gives me about 100MByte/s.
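>
>  For reference, a per-disk check of this sort (device paths, file
> names, and sizes here are assumptions):
>
>     # sequential write, then read back, bypassing the page cache
>     dd if=/dev/zero of=/disk1/testfile bs=1M count=4096 oflag=direct
>     dd if=/disk1/testfile of=/dev/null bs=1M count=4096 iflag=direct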
>
>  Is there anything obvious I'm missing? What could be causing this low
> read speed? I tried playing with the performance parameters, setting
> more or less cache and more or fewer io-threads, but the results did
> not improve. Moreover, I have tested all the suggested optimization
> hacks/parameters with little change in the results.
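>
>  The tuning attempts were along these lines (the option names follow
> gluster's volume set interface; the values shown are examples, not the
> exact ones I tried):
>
>     gluster volume set test performance.cache-size 512MB
>     gluster volume set test performance.io-thread-count 32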
>
>  A little more insight into the setup: we have machines with 24 HDs
> each, dual Xeon CPUs, and 16GByte of RAM, working over an Areca RAID
> card in JBOD mode, with all boxes connected via 10Gbit/s Infiniband.
> The CPUs are far from maxed out (top reports around 50% idle), as is
> the network.
>  I'm using CentOS 5.5 on all the machines. Let me know if you need
> more information on what's going on (configuration files, setup, etc.).
>
>  Thanks in advance,
> Daniel
>
> ----------------------------------------
> SkillUpJapan Corporation
> Research and Development Office
> Senior Research Engineer
> Daniel Pereira  d.pereira at skillupjapan.co.jp
> Tokyo, Shinjuku, Takadanobaba 1-24-16
> Uchida Building 1st Floor
> TEL: 03-5287-4087  FAX: 03-5287-4135
> http://www.skillupjapan.co.jp/
> ----------------------------------------
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>

