On Fri, 15 Jun 2012, Aditya Kali wrote:

> Based on the usecase at Google, I see a definite value in including
> hugepage usage in memory.usage_in_bytes as well and having a single
> limit for memory usage for the job. Our jobs want to specify only one
> (total) memory limit (including slab usage, other kernel memory
> usage, hugepages, etc.).
>
> The hugepage/smallpage requirements of the job vary during its
> lifetime. Having two different limits means less flexibility for jobs
> as they now have to specify their limit as (max_hugepage,
> max_smallpage) instead of max(hugepage + smallpage). Two limits
> complicate the API for the users and require them to over-specify
> the resources.

If a large number of hugepages are allocated on the kernel command line, for
example because dynamic allocation has a lower success rate due to
fragmentation, then with your suggestion the admin would no longer be able to
restrict the use of those hugepages to only a particular set of tasks.

Consider especially 1GB hugepages on x86: your suggestion would treat a single
1GB hugepage, which cannot be freed after boot, exactly the same as using 1GB
of ordinary memory, which is obviously not the desired behavior of any hugetlb
controller.
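To make the distinction concrete, here is a minimal sketch (my illustration,
not something from this thread) of how a task typically maps one of those
boot-reserved 1GB hugepages with an anonymous MAP_HUGETLB mapping; treat the
MAP_HUGE_1GB flag and the exact flag combination as assumptions for
illustration. The point is that the page comes out of the preallocated hugetlb
pool rather than the normal page allocator, so charging it like ordinary
memory misrepresents what the job is actually consuming.

/* Sketch only: map and touch one 1GB hugepage from the reserved pool.
 * MAP_HUGE_1GB may not be defined on older kernels/headers, hence the
 * #ifdef guard; this is an assumption for illustration. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (1UL << 30)		/* 1GB */

int main(void)
{
	int flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB;
#ifdef MAP_HUGE_1GB
	flags |= MAP_HUGE_1GB;	/* explicitly request the 1GB pool */
#endif
	void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE, flags, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");	/* fails if no 1GB pages were reserved */
		return 1;
	}
	memset(p, 0, LEN);	/* fault the page in from the reserved pool */
	munmap(p, LEN);
	return 0;
}

If no 1GB pages were reserved at boot (e.g. with hugepagesz=1G hugepages=N on
the kernel command line), the mmap() simply fails; the pool is fixed in size,
which is exactly why a separate hugetlb limit is useful for partitioning it
among a particular set of tasks.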