Re: Question regarding NFS 4.0 buffer sizes

On Thu, Feb 13, 2014 at 12:21:13PM +0000, McAninley, Jason wrote:
> Sorry for the delay.
> 
> > This ends up being cached, and the writeback should happen with
> > larger sizes.
> > Is this an issue with write size only, or with read size as well?
> > Did you test the read speed with something like the command below?
> > 
> > dd if=[nfs_dir]/foo bs=1M count=500 of=/dev/null
> > 
> > You can create a sparse "foo" file with the truncate command.
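
(For reference, the truncate invocation would be something like the
below -- untested, and assuming a 500M size to match the bs/count above:

	# create a 500M sparse file; no blocks are actually allocated
	truncate -s 500M [nfs_dir]/foo

Reads of the holes come back as zeroes, so this exercises the NFS read
path without any server disk I/O.)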
> 
> I have not tested read speeds yet, since that is a bit trickier: the client cache has to be avoided. I would suspect similar results, since we have mirrored the read/write settings in all the locations we're aware of.
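
(Dropping the client's page cache before the read should be enough to
defeat local caching -- from memory, untested:

	sync
	# free pagecache, dentries, and inodes on the client
	echo 3 > /proc/sys/vm/drop_caches
	dd if=[nfs_dir]/foo bs=1M count=500 of=/dev/null

Alternatively, iflag=direct on the dd bypasses the page cache entirely.)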
> 
>  
> > >
> > > 
> > > > Also, what kernel versions are you on?
> > >
> > > RH6.3, 2.6.32-279.el6.x86_64
> > 
> > NFS client and NFS server both using the same distro/kernel?
> 
> Yes - identical.
> 
> 
> Would multipath play any role here? I would suspect it would only help, not hinder. I have run Wireshark against both the slave and the master ports, with the same result: a maximum packet size of ~32K, regardless of the settings I listed in my original post.

I doubt it.  I don't know what's going on there.
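
It might be worth confirming what size the WRITE calls themselves carry,
as opposed to the size of individual TCP segments.  An untested capture
sketch, with <server> and eth0 standing in for your server address and
interface:

	# capture full packets on the NFS port, then inspect the
	# "count" field of the WRITE calls in Wireshark
	tcpdump -i eth0 -s 0 -w /tmp/nfs.pcap host <server> and port 2049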

The write size might actually be too small to keep enough write data in
flight; increasing tcp_slot_table_entries might work around that?
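
(From memory, on a 2.6.32-era kernel that's a sysctl, and it has to be
set before the mount, since the slot table is sized when the transport
is created -- untested:

	# default is 16, maximum 128 on these kernels
	cat /proc/sys/sunrpc/tcp_slot_table_entries
	echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries

or persistently with "options sunrpc tcp_slot_table_entries=128" in
/etc/modprobe.d/sunrpc.conf, followed by a remount.)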

Of course, since this is a Red Hat kernel, Red Hat would be the place to
ask for support, unless the problem's also reproducible on upstream
kernels.

--b.
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



