Re: extremely slow nfs when sync enabled

On Sun, 2012-05-06 at 03:00 +0000, Daniel Pocock wrote:
> 
> I've been observing some very slow nfs write performance when the server
> has `sync' in /etc/exports
> 
> I want to avoid using async, but I have tested it, and on my gigabit
> network it gives almost the same speed as if I were on the server
> itself (e.g. 30MB/sec to one disk, but less than 1MB/sec to the same
> disk over NFS with `sync').
> 
> I'm using Debian 6 with 2.6.38 kernels on client and server, NFSv3
> 
> I've also tried a client running Debian 7/Linux 3.2.0 with both NFSv3
> and NFSv4, speed is still slow
> 
> Looking at iostat on the server, I notice that avgrq-sz = 8 sectors
> (4096 bytes) throughout the write operations
> 
> I've tried various tests, e.g. dd a large file, or unpack a tarball with
> many small files, the iostat output is always the same

Were you using 'conv=sync'?
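
For reference, on GNU dd those flags mean very different things (the
file name below is just a placeholder):

    # conv=sync only pads short input blocks with NULs; it does NOT
    # make the writes synchronous
    dd if=/dev/zero of=testfile bs=1M count=100 conv=sync

    # oflag=sync opens the output file with O_SYNC, so every write
    # hits stable storage before dd continues
    dd if=/dev/zero of=testfile bs=1M count=100 oflag=sync

    # conv=fsync instead issues a single fsync() when dd is done
    dd if=/dev/zero of=testfile bs=1M count=100 conv=fsync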

> Looking at /proc/mounts on the clients, everything looks good, large
> wsize, tcp:
> 
> rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.x.x.x,mountvers=3,mountport=58727,mountproto=udp,local_lock=none,addr=192.x.x.x 0 0
> 
> and
> rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.x.x.x,minorversion=0,local_lock=none,addr=192.x.x.x 0 0
> 
> and in /proc/fs/nfs/exports on the server, I have sync and wdelay:
> 
> /nfs4/daniel
> 192.168.1.0/24,192.x.x.x(rw,insecure,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9,sec=1)
> /home/daniel
> 192.168.1.0/24,192.x.x.x(rw,root_squash,sync,wdelay,no_subtree_check,uuid=aa2a6f37:9cc94eeb:bcbf983c:d6e041d9)
> 
> Can anyone suggest anything else?  Or is this really the performance hit
> of `sync'?

It really depends on your disk setup. Particularly when your filesystem
is using barriers (enabled by default on ext4 and xfs), a lot of raid
setups really _suck_ at dealing with fsync(). The latter is used every
time the NFS client sends a COMMIT or trunc() instruction, and for
pretty much all file and directory creation operations (you can use
'nfsstat' to monitor how many such operations the NFS client is sending
as part of your test).
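
For example, something along these lines on the client (the mount point
is just a placeholder, and the exact counter layout varies a little
between nfsstat versions):

    # snapshot the client-side RPC counters before the test
    nfsstat -c > /tmp/nfsstat.before

    # run the workload against the NFS mount
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=100

    # compare; the growth in the 'write' and 'commit' counters is what
    # the server has to satisfy with fsync() under a 'sync' export
    nfsstat -c > /tmp/nfsstat.after
    diff /tmp/nfsstat.before /tmp/nfsstat.after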

Local disk can get away with doing a lot less fsync(), because the cache
consistency guarantees are different:
      * in NFS, the server is allowed to crash or reboot without
        affecting the client's view of the filesystem.
      * in the local file system, the expectation is that any data lost
        across a reboot won't need to be recovered (the application will
        have used fsync() for any data that does need to be persistent).
        Only the on-disk filesystem structures need to be recovered, and
        that is done using the journal (or fsck); see the quick check
        sketched below.
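
As a rough local comparison on the server (paths and sizes are just
placeholders; the absolute numbers will depend entirely on your
disk/RAID setup):

    # buffered writes: roughly the behaviour an 'async' export leans on
    dd if=/dev/zero of=/home/daniel/ddtest bs=1M count=100

    # O_SYNC writes: much closer to what the server has to do for a
    # 'sync' export; if this also collapses to ~1MB/sec, the bottleneck
    # is the disk/barrier handling rather than NFS itself
    dd if=/dev/zero of=/home/daniel/ddtest bs=1M count=100 oflag=sync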


-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@xxxxxxxxxx
www.netapp.com
