Hi,

We are experimenting with GlusterFS on a 4-node 10 Gb/s InfiniBand network. We would like to achieve the highest possible read performance (i.e. throughput) for a single node.

First we put together a single-server, single-client configuration. With this, we could achieve up to 480 MB/s (using the ib-verbs transport, of course). With 2 clients, the aggregate throughput was around 600 MB/s. Copying the same (very large) file in 4 parallel threads on one client gave 728 MB/s.

Our question is: is it theoretically possible to saturate the 10 Gb/s IB channel with GlusterFS in a single-server, single-client configuration? If yes, what are your recommendations for achieving this?

We tested the network with netperf (over IPoIB in connected mode, MTU set to 65520); it gave a throughput of 1100 MB/s. Disk performance is not a bottleneck, since we are reading sparse files from the server and writing to /dev/null with dd on the client, using bs=1M.

Without the read-ahead translator the results are a bit worse; on the other hand, changing any parameter from its default also resulted in a slight degradation of performance. Client CPU utilization is about 75%, but we have 4 cores in each node.

Here is the (fairly simple) configuration:

server
======

volume nfs-posix
  type storage/posix
  option directory /data/nfsmode
end-volume

volume nfs-iothreads
  type performance/io-threads
  option thread-count 8
  subvolumes nfs-posix
end-volume

volume nfs
  type performance/read-ahead
  subvolumes nfs-iothreads
end-volume

volume server
  type protocol/server
  option transport-type ib-verbs/server
  subvolumes nfs
  option auth.ip.nfs.allow *
end-volume

client
======

volume nfs
  type protocol/client
  option transport-type ib-verbs/client
  option remote-host 10.40.40.1
  option remote-subvolume nfs
end-volume

Thanks in advance,
--
cc
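
P.S. For completeness, here is roughly how we ran the read tests (the mount point and file name below are illustrative, not the real paths):

# on the server: create a large sparse test file, so nothing is
# actually read from disk (size is just an example)
dd if=/dev/zero of=/data/nfsmode/bigfile bs=1 count=0 seek=10G

# on the client: single-stream read, discarding the data
dd if=/mnt/glusterfs/bigfile of=/dev/null bs=1M

# 4-stream variant: four parallel readers of the same file
for i in 1 2 3 4; do
  dd if=/mnt/glusterfs/bigfile of=/dev/null bs=1M &
done
wait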
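The netperf baseline was measured along these lines (interface name is illustrative):

# on the server
netserver

# on the client: put IPoIB into connected mode with a large MTU,
# then run a TCP stream test against the server
echo connected > /sys/class/net/ib0/mode
ifconfig ib0 mtu 65520
netperf -H 10.40.40.1 -t TCP_STREAM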
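And this is the kind of read-ahead tuning we experimented with before falling back to the defaults (the values are just examples, and we took the option names from the performance/read-ahead documentation, so please correct us if we misread them):

volume nfs
  type performance/read-ahead
  option page-size 1MB      # size of each read-ahead request
  option page-count 16      # how many pages to keep read ahead
  subvolumes nfs-iothreads
end-volume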