I realize this is a long shot, but I figure it's worth a try.
What we're seeing is that "cached" grows large (several gigabytes) and "free"
gets small, and once we're in this state system responsiveness starts
dropping. If I do "echo 3 > /proc/sys/vm/drop_caches", it immediately frees
up a couple of gigabytes and things work as expected again.
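For reference, this is roughly how we observe the state and drop the caches. The meminfo inspection is read-only; the drop_caches writes require root and are shown commented out. The specific values echoed are the standard ones (1 = page cache, 3 = page cache plus dentries/inodes):

```shell
# Show how much memory is free vs. sitting in the page cache (read-only).
awk '/^(MemFree|Cached):/ { printf "%s %d MB\n", $1, $2/1024 }' /proc/meminfo

# Dropping only the page cache (1) is gentler than also dropping
# dentries and inodes (3). Sync first so dirty pages are written back
# and become reclaimable. Both writes require root.
# sync
# echo 1 > /proc/sys/vm/drop_caches   # free clean page cache only
# echo 3 > /proc/sys/vm/drop_caches   # page cache + dentry/inode caches
```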
Since the memory is immediately reclaimable, the bulk of the cache appears
to be clean pages. Clean page cache shouldn't have a major impact on memory
allocation, but at least on this setup it does, which suggests something is
not quite right in the mechanism that reclaims clean pages from the cache.
Is anyone aware of issues in this area for 2.6.27-vintage kernels?
Is there a way to limit how much memory gets used for the page cache?
drop_caches seems to help, but it's a really big hammer.
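As far as I know there is no mainline knob that directly caps page-cache size, but a few reclaim-related sysctls exist in 2.6.27 that influence when and how aggressively reclaim kicks in. A sketch of inspecting them (read-only) and one illustrative tweak; the 128 MB figure is an assumption, not a recommendation:

```shell
# Reclaim-related knobs present in 2.6.27-era kernels (read-only):
cat /proc/sys/vm/min_free_kbytes      # free-page reserve kswapd defends
cat /proc/sys/vm/vfs_cache_pressure   # eagerness to reclaim dentry/inode caches
cat /proc/sys/vm/dirty_ratio          # % of memory dirty before writers block

# Raising min_free_kbytes makes kswapd start reclaiming earlier, keeping a
# larger free pool at the cost of a smaller cache (requires root):
# echo 131072 > /proc/sys/vm/min_free_kbytes   # 128 MB reserve (illustrative)
```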
System details:
2.6.27 kernel, x86-64, 8GB RAM, no swap. Root is on a tmpfs filesystem,
local disks are used for miscellaneous stuff including /var/log, we have
some sizeable network-mounted filesystems.
/proc/sys/vm/overcommit_memory is set to 2, with overcommit_ratio set to
100.
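One thing worth noting about that combination: with strict overcommit (mode 2), the commit limit is swap plus overcommit_ratio percent of RAM, so with no swap and ratio 100 it works out to roughly the full 8 GB, and page cache is not charged against it. A quick way to check the accounting:

```shell
# With vm.overcommit_memory=2:
#   CommitLimit = swap + RAM * overcommit_ratio / 100
# On this box (8 GB RAM, no swap, ratio 100) that is roughly 8 GB.
# Committed_AS is the address space currently committed against that limit.
grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo
```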
Thanks,
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@xxxxxxxxxxx
www.genband.com