Re: glfs vs. unfsd performance figures (was: Multiple NFS Servers (Gluster NFS in 3.x, unfsd, knfsd, etc.))

--- On Fri, 1/8/10, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
...
> >> On writes, NFS gets 4.4MB/s, GlusterFS (server side AFR) gets 4.6MB/s. Pretty even.
> >> On reads, GlusterFS gets 117MB/s, NFS gets 119MB/s (on the first read after flushing the caches; after that it goes up to 600MB/s). The difference in the unbuffered readings seems to be in the same ball park, and the difference on the reads is roughly what I'd expect considering NFS is running UDP and GLFS is running TCP.
> >>
...
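For anyone wanting to reproduce that sort of uncached read figure, a minimal sketch is below. The mount point and test file are placeholders, and dropping the caches needs root; the `mbps` helper just converts (bytes, seconds) into MB/s.

```shell
# Helper: bytes and elapsed seconds -> MB/s (MiB-based, as dd reports).
mbps() { awk -v b="$1" -v t="$2" 'BEGIN { printf "%.1f\n", b / t / 1048576 }'; }

# Typical uncached sequential read test (run as root; path is hypothetical):
#   sync
#   echo 3 > /proc/sys/vm/drop_caches   # flush page/dentry/inode caches
#   time dd if=/mnt/glfs/testfile of=/dev/null bs=1M
#
# Example: a 1 GiB file read in 9 seconds:
mbps 1073741824 9
```

The second read without the cache flush is what produces the 600MB/s figure, since it comes straight from the page cache rather than the network.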

> # The machines involved are quad core
> time make -j8 all
>
> 1) pure ext3             6:40    CPU bound
> 2) ext3                 15:15    rootfs (glfs, no cache) I/O bound
> 3) ext3+knfsd            7:02    mostly network bound
> 4) ext3+unfsd           16:04
> 5) glfs                 61:54    rootfs (glfs, no cache) I/O bound
> 6) glfs+cache           32:32    rootfs (glfs, no cache) I/O bound
> 7) glfs+unfsd          278:30
> 8) glfs+cache+unfsd    189:15
> 9) glfs+cache+glfs     186:43

Am I understanding correctly that all the glfs benchmarks use AFR? If so, that may not be a very useful comparison, since AFR locking could well be your bottleneck for a make, and it would mask any real difference between your NFS server and a pure glfs setup. I think it would be more useful to take AFR out of the picture to get a clearer idea,
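For a rough idea of what an AFR-free baseline might look like, here is a minimal single-brick server volfile sketch in the 3.x hand-written volfile style, with no cluster/replicate translator in the graph. The export directory and volume names are placeholders, not taken from your setup:

```
volume posix
  type storage/posix
  option directory /export/data    # placeholder export path
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.posix.allow *   # open auth, for benchmarking only
  subvolumes posix
end-volume
```

Pointing the client at a graph like this (and at the corresponding non-replicated client volfile) would let you rerun the make and see how much of the slowdown is AFR versus FUSE/glfs itself.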

-Martin
