Re: NFS for millions of files

On Sep 2, 2009, at 2:08 PM, Jason Legate wrote:
Hi, I'm trying to set up a server on which we can create millions of files over NFS. When I run our creation benchmark locally I can get around 3000 files/second in the configuration we're using now, but only around 300/second over NFS.  It's mounted as this:

rw,nosuid,nodev,noatime,nodiratime,hard,bg,nointr,rsize=32768,wsize=32768,tcp,nfsvers=3,timeo=600,actimeo=600,nocto

When I mount the same FS over localhost instead of across the LAN, it performs at about full speed (the 3000/sec). Anyone have any ideas what I might tweak or look at?

We're going to be testing various XFS/LVM configs to get the best performance, but right out of the gate, NFS imposing a 10:1 performance penalty doesn't bode well.
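(For reference, a minimal sketch of the kind of creation benchmark described above; the poster's actual tool isn't shown, so the directory, file names, and count here are placeholders.)

```shell
#!/bin/sh
# Sketch only: measures file creates per second in a target directory.
# $1 defaults to a temp dir; point it at the NFS mount to compare.
DIR=${1:-$(mktemp -d)}     # e.g. /mnt/nfs/bench (hypothetical mount point)
COUNT=${2:-10000}

mkdir -p "$DIR"
start=$(date +%s)
i=0
while [ "$i" -lt "$COUNT" ]; do
    : > "$DIR/f$i"          # create an empty file: one metadata op each
    i=$((i + 1))
done
end=$(date +%s)
elapsed=$((end - start))
[ "$elapsed" -gt 0 ] || elapsed=1   # avoid divide-by-zero on fast local runs
echo "created $COUNT files in ${elapsed}s: $((COUNT / elapsed)) files/sec"
```

Running it once against local disk and once against the NFS mount should reproduce the 3000-vs-300 comparison.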

If you are using a slow LAN (100 Mb/s, say), that might be a problem.

Metadata operations (like file creation) are always slower on NFS than on local file systems. There is significantly more serialization involved for NFS since access to the file system is shared across multiple systems.

You might consider a cluster file system instead of NFS if you are driving metadata-intensive workloads while sharing amongst only local clients. Or, if you can install a fast log device for your server file system, that might mitigate the disk waits during each file creation.
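(One hedged sketch of the fast-log-device idea, assuming the server's export is XFS and a fast device such as /dev/nvme0n1p1 is available; both device names below are hypothetical.)

```shell
# Sketch only: put the XFS journal on a separate fast device, so the
# synchronous log writes behind each NFS file create hit fast storage.
mkfs.xfs -l logdev=/dev/nvme0n1p1,size=64m /dev/sdb1
mount -o logdev=/dev/nvme0n1p1 /dev/sdb1 /export
```

The logdev mount option must match the log device given at mkfs time.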

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com



--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
