Performance gluster 3.2.5 + QLogic Infiniband

Hi all,

i am currently in the process of deploying gluster as a storage/scratch
file system for a new HPC cluster.

For storage I use HP storage arrays (12x2 TB disks, formatted with xfs,
plain vanilla options). Performance seems to be ok, as I am getting more
than 800 MB/s when testing with hdparm and with
"dd < /dev/zero > /path/to/storage/file bs=1024k count=100".

The InfiniBand fabric consists of QLE7342 cards running the latest QLogic
OFED (based on stock 1.5.3). Performance seems to be ok here as well: with
the osu_bw benchmark I reach 3.2 GB/s uni-directionally, and iperf reports
15 Gbps over IPoIB (connected mode, MTU 65520), which I think is not too
bad either.
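In case the setup matters: the IPoIB interfaces are configured along these
lines, and the iperf figure comes from a plain run of the standard
client/server pair (interface name and address are placeholders):

    echo connected > /sys/class/net/ib0/mode   # IPoIB connected mode
    ifconfig ib0 mtu 65520
    iperf -s                                   # on one storage server
    iperf -c <server-ipoib-address>            # on another node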

The servers all run RHEL 5.8 and each has two X5690 CPUs and 24 GB of RAM.


Now, if I create a new volume (transport tcp,rdma) using one brick (about
8 TB) on one of the storage hosts and mount it on the same host as a
gluster mount (rdma or non-rdma does not matter), the read/write
performance does not exceed 400 MB/s (running the same simple dd test as
above). The same is true if I mount it on another node. That means I am
somehow missing roughly a factor of 2 in performance.
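Roughly, the volume is created and mounted like this (hostname, brick path
and volume name are placeholders, and I may be paraphrasing the exact rdma
mount syntax):

    gluster volume create scratch transport tcp,rdma storage1:/bricks/brick0
    gluster volume start scratch
    # FUSE mount over tcp
    mount -t glusterfs storage1:/scratch /mnt/scratch
    # rdma mount (as far as I understand, the .rdma volume name selects
    # the rdma transport)
    mount -t glusterfs storage1:/scratch.rdma /mnt/scratch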

I have been reading through the mailing list and the documentation and
have tried various things (tuning the storage, setting various options on
the gluster volume, etc.), but so far without success.
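To give an idea, the kind of volume options I have been experimenting with
look like this (the values are just examples, not settings I am
recommending):

    gluster volume set scratch performance.cache-size 512MB
    gluster volume set scratch performance.io-thread-count 16
    gluster volume set scratch performance.write-behind-window-size 4MB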

What could be the problem here? Any pointers would be appreciated.

Many thanks,

Michael.



