On 1/14/20 5:26 PM, Mina Almasry wrote:
> These counters will track hugetlb reservations rather than hugetlb
> memory faulted in. This patch only adds the counter, following patches
> add the charging and uncharging of the counter.
>
> This is patch 1 of an 8 patch series.
>
> Problem:
> Currently tasks attempting to reserve more hugetlb memory than is available get
> a failure at mmap/shmget time. This is thanks to Hugetlbfs Reservations [1].
> However, if a task attempts to reserve hugetlb memory only more than its

*reword*
However, if a task attempts to reserve more hugetlb memory than its

> hugetlb_cgroup limit allows, the kernel will allow the mmap/shmget call,
> but will SIGBUS the task when it attempts to fault the memory in.

*reword*
but will SIGBUS the task when it attempts to fault in the excess memory.

>
> We have users hitting their hugetlb_cgroup limits and thus we've been
> looking at this failure mode. We'd like to improve this behavior such that users
> violating the hugetlb_cgroup limits get an error on mmap/shmget time, rather
> than getting SIGBUS'd when they try to fault the excess memory in. This
> gives the user an opportunity to fallback more gracefully to
> non-hugetlbfs memory for example.
>
> The underlying problem is that today's hugetlb_cgroup accounting happens
> at hugetlb memory *fault* time, rather than at *reservation* time.
> Thus, enforcing the hugetlb_cgroup limit only happens at fault time, and
> the offending task gets SIGBUS'd.
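
For illustration, here is a minimal sketch of the failure mode described
above (my sketch, not code from this series).  It assumes a 2MB default
huge page size, that the hugetlb pool has enough pages so the reservation
made at mmap() time succeeds, and that the calling task has already been
placed in a hugetlb_cgroup whose hugetlb.2MB.limit_in_bytes is smaller
than the mapping:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGETLB
#define MAP_HUGETLB 0x40000
#endif

#define LENGTH	(256UL * 1024 * 1024)	/* 128 x 2MB huge pages */

int main(void)
{
	/*
	 * The reservation is made here.  With today's fault-time
	 * accounting the hugetlb_cgroup limit is not consulted yet,
	 * so the mmap() succeeds.
	 */
	char *p = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * Faulting the pages in charges the hugetlb_cgroup.  Once the
	 * charge exceeds limit_in_bytes the task is killed with SIGBUS
	 * partway through the memset().
	 */
	memset(p, 0, LENGTH);

	puts("faulted everything in without hitting the limit");
	munmap(p, LENGTH);
	return 0;
}

With the reservation counter proposed below, the same program would instead
get a failure from mmap() itself, which an application can handle much more
gracefully than a SIGBUS in the middle of the memset().
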
>
> Proposed Solution:
> A new page counter named
> 'hugetlb.xMB.reservation_[limit|usage|max_usage]_in_bytes'. This counter has
> slightly different semantics than

You changed the name to 'hugetlb.xMB.resv_[limit|usage|max_usage]_in_bytes'
in the code, but left this description.  Also, David suggested 'rsvd' as the
abbreviation to use here.  I would also prefer that name to be consistent
with other hugetlb interfaces.

> 'hugetlb.xMB.[limit|usage|max_usage]_in_bytes':
>
> - While usage_in_bytes tracks all *faulted* hugetlb memory,
> reservation_usage_in_bytes tracks all *reserved* hugetlb memory and
> hugetlb memory faulted in without a prior reservation.
>
> - If a task attempts to reserve more memory than limit_in_bytes allows,
> the kernel will allow it to do so. But if a task attempts to reserve
> more memory than reservation_limit_in_bytes, the kernel will fail this
> reservation.
>
> This proposal is implemented in this patch series, with tests to verify
> functionality and show the usage.
>
> Alternatives considered:
> 1. A new cgroup, instead of only a new page_counter attached to
> the existing hugetlb_cgroup. Adding a new cgroup seemed like a lot of code
> duplication with hugetlb_cgroup. Keeping hugetlb related page counters under
> hugetlb_cgroup seemed cleaner as well.
>
> 2. Instead of adding a new counter, we considered adding a sysctl that modifies
> the behavior of hugetlb.xMB.[limit|usage]_in_bytes, to do accounting at
> reservation time rather than fault time. Adding a new page_counter seems
> better as userspace could, if it wants, choose to enforce different cgroups
> differently: one via limit_in_bytes, and another via
> reservation_limit_in_bytes. This could be very useful if you're
> transitioning how hugetlb memory is partitioned on your system one
> cgroup at a time, for example. Also, someone may find usage for both
> limit_in_bytes and reservation_limit_in_bytes concurrently, and this
> approach gives them the option to do so.
>
> Testing:
> - Added tests passing.
> - Used libhugetlbfs for regression testing.
>
> [1]: https://www.kernel.org/doc/html/latest/vm/hugetlbfs_reserv.html
>
> Signed-off-by: Mina Almasry <almasrymina@xxxxxxxxxx>
>
> ---
> Changes in v10:
> - Renamed reservation_* to resv.*
>
> ---
>  include/linux/hugetlb.h |   4 +-
>  mm/hugetlb_cgroup.c     | 115 +++++++++++++++++++++++++++++++++++-----
>  2 files changed, 104 insertions(+), 15 deletions(-)

The code looks fine to me.  With the commit message and naming updates, I
will add a Reviewed-by:

Please do wait a few/several days before sending a revised version to make
sure we get all feedback.  I really would like to get comments from people
more familiar with cgroups.
-- 
Mike Kravetz