From: Mark Moseley <moseleymark@xxxxxxxxx>
Subject: Re: 3.0.3 64-bit Crash running fscache/cachefilesd
One slightly interesting thing, unrelated to fscache: This box is
part of a pool of servers, all serving the same web workloads. Another
box in this pool is running 3.0.4, up for about 23 days (vs 6 hrs),
and its nfs_inode_cache is approximately 1/4 the size and 1/3 the
object count of the 3.1.0-rc8 box's; likewise, dentry on a 3.0.4 box
with a much longer uptime is about 1/9 the size (200k objects vs
1.8 million objects, 45 MB vs 400 MB) of the 3.1.0-rc8 box's. Dunno if
that's the result of VM improvements or a symptom of something leaking :)
Have you tweaked your dentry and inode hash table settings yet?
Mine are: dhash_entries=536870912 ihash_entries=268435456
The kernel doesn't actually use that many, as there's a hardcoded limit
of 5% of memory (a kernel recompile is needed to raise it) - but it's
worthwhile on a big-memory fileserving box with lots of directories and
files in the FS, since the penalty for hashing and walking the hash
tables is far smaller than the penalty for disk seeks.
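A quick back-of-the-envelope check of those numbers - a sketch only,
assuming 8-byte hash bucket pointers on a 64-bit kernel and a 128 GiB
box (both illustrative assumptions, not from the mail above), with the
5% ceiling taken from the text:

```shell
# Rough sizing sketch: does dhash_entries=536870912 fit under the
# kernel's ~5% of memory cap? (8 bytes/bucket and 128 GiB RAM are
# assumptions for the example.)
mem_bytes=$((128 * 1024 * 1024 * 1024))   # example: 128 GiB box
cap_bytes=$((mem_bytes * 5 / 100))        # hardcoded ~5% ceiling

dhash_entries=536870912                   # 2^29, as above
table_bytes=$((dhash_entries * 8))        # bucket array, 8 B/bucket

echo "requested dentry table: $((table_bytes / 1024 / 1024)) MiB"
echo "5% cap:                 $((cap_bytes / 1024 / 1024)) MiB"
```

On a smaller-memory box the same arithmetic shows the request getting
clamped, which is why the boot parameters can silently have less effect
than expected.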
The tweak is more applicable on the fileserver than on NFS clients, but
it's worth looking into if the client needs to traverse large directory
structures.
(FWIW, I'm also playing around with zramswap on a client - it makes a
big difference, as does tweaking sysctl.conf on both client and server.)
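For anyone who hasn't set zram swap up by hand, a minimal sketch - the
device name, size, priority, and the vfs_cache_pressure value are all
illustrative assumptions, not settings from this thread, and it needs
root plus a kernel with the zram module:

```shell
# Minimal zram swap setup (assumptions: zram module available, run as
# root; 4G and priority 100 are example values, not recommendations)
modprobe zram                          # creates /dev/zram0
echo 4G > /sys/block/zram0/disksize    # size of the compressed device
mkswap /dev/zram0
swapon -p 100 /dev/zram0               # prefer zram over disk swap

# One commonly tuned sysctl for dentry/inode-heavy NFS workloads
# (value is illustrative; lower = keep caches longer)
sysctl vm.vfs_cache_pressure=50
```

The swapon priority makes the kernel fill the compressed device before
touching any disk-backed swap, which is the point of the exercise.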
--
Linux-cachefs mailing list
Linux-cachefs@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cachefs