Question regarding NFS 4.0 buffer sizes

I'm looking for detailed documentation regarding some of the innards of NFS 4.0, without necessarily having to read through the source code (if such documentation exists). 

Specifically, I'm interested in the relationship between NFS's rsize/wsize mount options and some of the lower-level networking buffers. Buffer parameters I have come across (and my current settings) include:

  - sysctl's net.core.{r,w}mem_default: 229376
  - sysctl's net.core.{r,w}mem_max:     131071
  - sysctl's net.ipv4.tcp_{r,w}mem      4096 87380 4194304
  - #define RPCSVC_MAXPAYLOAD          (1*1024*1024u)
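For reference, the sysctl values above can be read back in one place (the output is system-specific; the numbers shown in the list are just my current settings):

```shell
# Inspect the current socket-buffer defaults and ceilings.
sysctl net.core.rmem_default net.core.rmem_max
sysctl net.core.wmem_default net.core.wmem_max
# tcp_rmem/tcp_wmem are "min default max" triples (bytes).
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
```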

When I run Wireshark during an NFS transfer, I see MAXREAD/MAXWRITE attributes returned from GETATTR with a value of 1MB. I'm guessing this corresponds to the limit set by RPCSVC_MAXPAYLOAD? However, the maximum packet size I'm recording in practice is ~32K.

In fact, regardless of the {r,w}size setting (I've tried 32K, 64K, and 128K), I am not seeing any change in the maximum packet size.
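One thing that may be worth ruling out first: the rsize/wsize passed to mount(8) is a request, not a guarantee — the client negotiates it down to the server's advertised maximum. The values the kernel actually settled on can be checked on the client:

```shell
# Show the effective rsize/wsize negotiated for each NFS mount;
# these may be smaller than what was requested on the mount command line.
grep nfs4 /proc/mounts

# Or, with nfs-utils installed, a friendlier per-mount summary:
nfsstat -m
```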

This is leading me to investigate the buffer sizes on the client and server. My thought is that if some buffer along the path is too small, the NFS client/kernel will ship a packet before reaching the MAXWRITE size.

Any input is appreciated.

-Jason
--
