Re: NFS performance - default rsize




On Jun 22, 2010, at 11:44 AM, Alex Still <alex.ranskis@xxxxxxxxx> wrote:

> [...]
> 
>>> On some servers this behavior returned despite rsize being set to 32k;
>>> I had to set it to 8k to get reasonable throughput. So there's
>>> definitely something fishy going on. This has been reported on over 20
>>> machines, so I don't think it's faulty hardware we're seeing.
>>> 
>>> Any thoughts, ideas on how to debug this ?
>> 
>> Can you explain the network environment and the connectivity between the client and server some more?
> 
> Clients are blade servers. The blade chassis have integrated cisco
> switches, which are plugged to a cisco 6509. The NFS server is on
> another site 40km away, directly connected to another 6509.  These
> datacenters are linked via DWDM.
> Latency between a client and the NFS server is about half a
> millisecond. Jumbo frames are enabled.
> 
> Blades have 1 Gb link
> The NFS server has multiple 1Gb links, used for different shares
> Neither is close to full utilization — maybe 100Mb/s of traffic and
> 20,000 packets/s at the server end

I have seen non-standard jumbo frames cause problems in the past.

Can you try unmounting the shares on one client, setting the MTU to 1500, re-mounting the shares, and seeing how it performs?

TCP between the server and client will negotiate down to the client's MSS, so there is no need to change the server's MTU.
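For reference, a rough sketch of that test on one client — the interface name (eth0), NFS server path, and mount point are placeholders, so substitute your own:

```shell
# Assumptions: eth0 is the client's NFS-facing interface,
# server:/export is mounted at /mnt/nfs, and a large test file exists there.
umount /mnt/nfs                          # unmount the NFS share first
ip link set dev eth0 mtu 1500            # drop from jumbo frames to standard MTU
mount -t nfs server:/export /mnt/nfs     # re-mount the share

# Crude sequential-read throughput check (drop the page cache first
# so you measure the wire, not RAM):
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/nfs/testfile of=/dev/null bs=1M count=1024
```

If throughput recovers at MTU 1500 with the original rsize, that points at jumbo-frame handling somewhere on the path rather than at NFS itself.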

-Ross

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos

