Re: NFS client (3.0.0-3.2.8) high system CPU

On Thu, 2012-03-01 at 08:52 +0800, Steven Wilton wrote:
> Hi,
> 
> I've been trying to track down an issue we've started seeing on a bunch of NFS clients, where the proportion of CPU time spent in the system has been increasing over the past month (the machines previously spent roughly equal system/user time, while system time is now around 3x user time).  The information that I have is as follows:
> 
> - After rebooting the server, the user/system CPU times look good (roughly equal)
> - After 24 hours of heavy activity the system CPU time increases to around 3-4x the user CPU time
> - If I run "echo 2 > /proc/sys/vm/drop_caches" the system CPU time drops back down to roughly the same as user
> 
> The main difference that I can see in slabtop between a system running at high load and at "normal" load is the number of nfs_inode_cache objects (as shown below).  I tried increasing the ihash_entries and dhash_entries kernel parameters, but that did not fix the problem.  I have not found any other suggestions on how to resolve issues caused by large NFS inode caches (the relevant commands are sketched after this message).
> 
> I have tried various kernels between 3.0.0 and 3.2.4, and the machines are currently running a 3.0.22 kernel.  The machines have 8GB RAM and have three NFSv4 mounts and one NFSv3 mount, with the majority of the files they access on one of the NFSv4 mount points, a maildir-style mail spool.
> 
> I have increased /proc/sys/vm/vfs_cache_pressure to 10000, which has resolved the problem for now.  However, I believe we started seeing the issue because we added a lot of extra users to the system, so each client now accesses a larger number of files.  I am not confident that future growth will stay below whatever threshold we exceeded: the problem seemed to appear at around 1,000,000 nfs_inode_cache entries in slabtop, and the clients are currently floating between 500,000 and 900,000 nfs_inode_cache entries.
> 
> Help please :), and please let me know if I can provide any more information to assist in debugging.
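
For reference, the observation and tuning steps described in the report
above boil down to roughly the following sketch (the 10000 value is the
one used above; the vfs_cache_pressure default is 100):

    # count cached NFS inodes; slabtop reads the same slab data
    grep nfs_inode_cache /proc/slabinfo

    # drop reclaimable dentries and inodes
    sync
    echo 2 > /proc/sys/vm/drop_caches

    # make the VM reclaim dentries/inodes more aggressively
    sysctl -w vm.vfs_cache_pressure=10000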

This looks more like a problem for the memory management folks than for
the NFS team. The NFS client has very limited control over the caching
of inodes.

One thing that you might try is turning off readdirplus (using the
"-o nordirplus" mount option) and seeing whether that causes fewer
inodes to be created in the first place.
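
For the NFSv3 mount, that might look like the following (a sketch with
hypothetical server and mount point names; NFS mount options cannot be
changed on a live mount, so a full remount is needed):

    umount /mnt/mail
    mount -t nfs -o vers=3,nordirplus server:/export /mnt/mail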

Cheers
  Trond
-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@xxxxxxxxxx
www.netapp.com
