On 08/01/2010 21:15, Martin Fick wrote:
--- On Fri, 1/8/10, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
...
On writes, NFS gets 4.4MB/s, GlusterFS (server-side AFR) gets 4.6MB/s.
Pretty even. On reads, GlusterFS gets 117MB/s, NFS gets 119MB/s (on the
first read after flushing the caches; after that it goes up to 600MB/s).
The difference in the unbuffered readings seems to be in the same ball
park, and the difference on the reads is roughly what I'd expect
considering NFS is running UDP and GLFS is running TCP.
...
# The machines involved are quad core
time make -j8 all

1) pure ext3              6:40   CPU bound
2) ext3                  15:15   rootfs (glfs, no cache) I/O bound
3) ext3+knfsd             7:02   mostly network bound
4) ext3+unfsd            16:04
5) glfs                  61:54   rootfs (glfs, no cache) I/O bound
6) glfs+cache            32:32   rootfs (glfs, no cache) I/O bound
7) glfs+unfsd           278:30
8) glfs+cache+unfsd     189:15
9) glfs+cache+glfs      186:43
Am I understanding correctly that all the glfs benchmarks are using AFR?
If so, perhaps that is not a very useful comparison, since the AFR
locking might be your bottleneck with a make? It would then not
highlight any potential differences between your nfs server and pure
glfs setup. I think it would be more useful to remove AFR from the
picture to get a real idea.
I would guess that the key reason for the performance deterioration vs.
bare metal is FUSE rather than AFR. In all cases, the slave server
should only be getting writes.
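If anyone wants to pin that down, the obvious check is to time the same
unbuffered read against the backing ext3 directory, against a glfs mount
of a single brick (no AFR), and against the replicated mount. If the two
glfs mounts come out roughly even and well behind raw ext3, FUSE is the
bottleneck rather than AFR. A rough sketch, with the paths being nothing
more than placeholders:

  # drop the page cache before each run so the reads are unbuffered
  sync; echo 3 > /proc/sys/vm/drop_caches
  time dd if=/data/export/testfile of=/dev/null bs=1M        # raw ext3 brick

  sync; echo 3 > /proc/sys/vm/drop_caches
  time dd if=/mnt/glfs-single/testfile of=/dev/null bs=1M    # glfs, single brick, no AFR

  sync; echo 3 > /proc/sys/vm/drop_caches
  time dd if=/mnt/glfs-afr/testfile of=/dev/null bs=1M       # glfs, server-side AFR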
The ext3 tests are there purely as reference points and to get some idea
of the difference in performance between knfsd and unfsd.
The difference between tests 5 and 7, however, is relevant, because all
of that difference (roughly 4.5x: 61:54 vs. 278:30) comes from having
the extra hop between the client and the server, since in both cases
the underlying glfs setup is the same.
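To make that concrete, the two setups compare roughly like this (the
hostnames, paths and volfile names below are invented for illustration,
not the actual configs):

  # test 5: the client mounts the glfs volume directly
  client# glusterfs -f /etc/glusterfs/client.vol /mnt/build

  # test 7: the same glfs volume, but reached through unfsd, so every
  # request makes an extra hop: client -> unfsd -> glfs client -> server
  nfsbox# glusterfs -f /etc/glusterfs/client.vol /mnt/glfs
  nfsbox# unfsd -e /etc/exports           # /etc/exports lists /mnt/glfs
  client# mount -t nfs nfsbox:/mnt/glfs /mnt/build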
However, the main point I wanted to get to the bottom of was whether
using glfs for the server<->client connection has any benefit over
unfsd, and the test quite clearly shows that it doesn't: there is no
real difference between tests 8 and 9 (189:15 vs. 186:43, within about
1.5%).
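In other words, tests 8 and 9 only differ in how the client reaches the
same cached glfs backend, something along these lines (again, names are
invented and the re-export stack itself is left out):

  # test 8 (glfs+cache+unfsd): client goes through the userspace NFS server
  client# mount -t nfs server:/mnt/glfs /mnt/build

  # test 9 (glfs+cache+glfs): client talks the glusterfs protocol instead
  client# glusterfs -f /etc/glusterfs/client-cache.vol /mnt/build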
Gordan