We ran some netperf comparisons measuring the overhead of enabling CONFIG_MEMCG_KMEM with a kmem limit. Short answer: no regression seen.

This was a two-machine (client/server) netperf test; both the client and server machines ran the same kernel with the same configuration. A baseline run (CONFIG_MEMCG_KMEM unset) was compared against a full-featured run (CONFIG_MEMCG_KMEM=y, with a kmem limit large enough not to put additional pressure on the workload). We saw no noticeable regression running:

- TCP_CRR: efficiency, latency
- TCP_RR: latency, rate
- TCP_STREAM: efficiency, throughput
- UDP_RR: efficiency, latency

The tests were run with a varying number of concurrent connections (between 1 and 200).

The source came from one of Glauber's branches (git://git.kernel.org/pub/scm/linux/kernel/git/glommer/memcg kmemcg-slab):

    commit 70506dcf756aaafd92f4a34752d6b8d8ff4ed360
    Author: Glauber Costa <glommer@xxxxxxxxxxxxx>
    Date:   Thu Aug 16 17:16:21 2012 +0400

        Add slab-specific documentation about the kmem controller

It's not the latest source, but I figured the data might still be useful.
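
For anyone wanting to reproduce something similar, the setup can be sketched roughly as below. This is a minimal illustration, not the harness actually used: the server address, run length, kmem limit value, and cgroup path are all assumptions, and the script only prints the command lines (dry-run) so it is safe to run without netperf installed.

```shell
#!/bin/sh
# Dry-run sketch of the comparison runs: prints the netperf command
# lines instead of executing them. All concrete values below are
# illustrative assumptions, not the originals.
SERVER=192.0.2.1   # placeholder netserver address

gen_commands() {
    # On the CONFIG_MEMCG_KMEM=y kernel, a generous kmem limit would be
    # set first; with the cgroup v1 memory controller that looks like:
    echo "echo 8G > /sys/fs/cgroup/memory/netperf/memory.kmem.limit_in_bytes"

    # The four test types, each at a sample of concurrency levels
    # (the real runs swept 1..200 concurrent connections).
    for test in TCP_CRR TCP_RR TCP_STREAM UDP_RR; do
        for conns in 1 50 100 200; do
            echo "netperf -H $SERVER -t $test -l 60   # x$conns concurrent"
        done
    done
}

gen_commands
```

Each printed netperf line would be launched the indicated number of times in parallel against a netserver on the remote machine, once for the baseline kernel and once for the kmem-enabled kernel.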