On 03/06/2015 01:14 PM, David Rientjes wrote:
> On Fri, 6 Mar 2015, Mike Kravetz wrote:
>> Thanks for the CONFIG_CGROUP_HUGETLB suggestion; however, I do not
>> believe it will be a satisfactory solution for my use case. As you
>> point out, cgroups could be set up (by a sysadmin) for every hugetlb
>> user/application. In that case, the sysadmin needs to know about
>> every huge page user/application and configure each one appropriately.
>>
>> I was approaching this from the point of view of the application. The
>> application wants a guarantee of a minimum number of huge pages,
>> independent of other users/applications. The "reserve" approach allows
>> the application to set aside those pages at initialization time. If it
>> cannot get the pages it needs, it can refuse to start, configure
>> itself to use less, or take other action.
> Would it be too difficult to modify the application to mmap() the
> hugepages at startup so they are no longer free in the global pool but
> rather get marked as reserved so other applications cannot map them?
> That should return MAP_FAILED if there is an insufficient number of
> hugepages available to be reserved (HugePages_Rsvd in /proc/meminfo).
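
That would work for a single task; a minimal sketch of what I think you
are describing, assuming MAP_HUGETLB is available and the default 2 MB
huge page size (the 512-page count is only an example):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/*
 * Reserve huge pages up front by mapping them at startup.
 * Assumes 2 MB huge pages; NR_PAGES is an arbitrary example value.
 */
#define HUGE_PAGE_SIZE	(2UL * 1024 * 1024)
#define NR_PAGES	512UL

int main(void)
{
	size_t len = NR_PAGES * HUGE_PAGE_SIZE;
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (addr == MAP_FAILED) {
		/* Not enough huge pages available to reserve. */
		perror("mmap(MAP_HUGETLB)");
		exit(1);
	}

	/* Pages are now reserved for this mapping; use or hold them. */

	munmap(addr, len);
	return 0;
}

If the mmap() succeeds, HugePages_Rsvd goes up by NR_PAGES and the
application knows it has its pages; if not, it can bail out cleanly at
startup.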
The application is a database with multiple processes/tasks that will
come and go over time. I thought about having one task do a big
mmap() at initialization time, but then the issue is how to coordinate
with the other tasks and their requests to allocate/free pages.
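
For concreteness, the "big mmap()" I was thinking of would look roughly
like the sketch below (the hugetlbfs mount point and sizes are made up),
with all the per-task allocate/free bookkeeping pushed into user space:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * One task creates and maps a file on a hugetlbfs mount, which reserves
 * the huge pages for the whole pool.  Other tasks would open and mmap()
 * pieces of the same file, but which task owns which offset -- i.e. the
 * allocate/free of individual pages -- must be coordinated by hand.
 */
#define HUGE_PAGE_SIZE	(2UL * 1024 * 1024)	/* assumes 2 MB pages */
#define POOL_PAGES	512UL			/* example pool size */

int main(void)
{
	size_t len = POOL_PAGES * HUGE_PAGE_SIZE;
	int fd = open("/mnt/hugetlbfs/db_pool", O_CREAT | O_RDWR, 0600);

	if (fd < 0) {
		perror("open");
		exit(1);
	}
	if (ftruncate(fd, len) < 0) {
		perror("ftruncate");
		exit(1);
	}

	/* Mapping the whole file reserves the pages for all tasks. */
	void *pool = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);
	if (pool == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	/* A user-space allocator would hand out offsets to other tasks. */

	munmap(pool, len);
	close(fd);
	return 0;
}

That mostly moves the problem: every task now depends on a user-space
allocator for that file instead of simply asking the kernel for pages.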
--
Mike Kravetz