On Tue, Oct 16, 2012 at 06:25:06PM +0000, Christoph Lameter wrote:
> On Tue, 16 Oct 2012, Glauber Costa wrote:
>
> >
> > + memory.kmem.limit_in_bytes      # set/show hard limit for kernel memory
> > + memory.kmem.usage_in_bytes      # show current kernel memory allocation
> > + memory.kmem.failcnt             # show the number of kernel memory usage hits limits
> > + memory.kmem.max_usage_in_bytes  # show max kernel memory usage recorded
>
> Does it actually make sense to limit kernel memory? The user generally has
> no idea how much kernel memory a process is using and kernel changes can
> change the memory footprint. Given the fuzzy accounting in the kernel a
> large cache refill (if someone configures the slab batch count to be
> really big f.e.) can account a lot of memory to the wrong cgroup. The
> allocation could fail.
>
> Limiting the total memory use of a process (U+K) would make more sense I
> guess. Only U is probably sufficient? In what way would a limitation on
> kernel memory in use be good?

It's about preventing abuse caused by bugs or malicious use, and keeping
groups from stepping on each other's toes. Are you saying that letting a
group allocate 32GB of paged memory is the same as letting it allocate
32GB of kernel memory?

I don't believe sysadmins will keep a tight limit on kernel memory; it
will rather be a safety limit in case something goes wrong.
usage_in_bytes will provide the data needed to adjust the limits.

The inaccuracy of the kmem accounting is (AFAIK) a cost tradeoff.

-- 
Aristeu
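
To make the "loose safety limit" idea concrete, here is a minimal sketch of
how an admin might program the memory.kmem.* files quoted above from user
space. It assumes the v1 memory controller is mounted at
/sys/fs/cgroup/memory and that a group named "mygroup" already exists; the
path, the group name, and the 1 GiB figure are assumptions made purely for
illustration, not part of the patch.

	/*
	 * Sketch: set a generous kmem safety limit on an existing memcg and
	 * read back current usage/failcnt via the documented control files.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define CGROUP_DIR "/sys/fs/cgroup/memory/mygroup"	/* assumed path */

	static int write_value(const char *file, const char *value)
	{
		char path[256];
		FILE *f;

		snprintf(path, sizeof(path), "%s/%s", CGROUP_DIR, file);
		f = fopen(path, "w");
		if (!f) {
			perror(path);
			return -1;
		}
		/* cgroup control files take plain ASCII values */
		fprintf(f, "%s\n", value);
		return fclose(f);
	}

	static long long read_value(const char *file)
	{
		char path[256];
		long long val = -1;
		FILE *f;

		snprintf(path, sizeof(path), "%s/%s", CGROUP_DIR, file);
		f = fopen(path, "r");
		if (!f) {
			perror(path);
			return -1;
		}
		if (fscanf(f, "%lld", &val) != 1)
			val = -1;
		fclose(f);
		return val;
	}

	int main(void)
	{
		/* 1 GiB: deliberately generous safety limit, not a tight budget */
		if (write_value("memory.kmem.limit_in_bytes", "1073741824"))
			return EXIT_FAILURE;

		printf("kmem limit  : %lld bytes\n",
		       read_value("memory.kmem.limit_in_bytes"));
		printf("kmem usage  : %lld bytes\n",
		       read_value("memory.kmem.usage_in_bytes"));
		printf("kmem failcnt: %lld\n",
		       read_value("memory.kmem.failcnt"));
		return EXIT_SUCCESS;
	}

The point of the sketch is only the workflow: set the limit well above the
expected footprint, then watch usage_in_bytes and failcnt over time to
decide whether the limit needs adjusting.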