On Tue, Jul 05, 2022 at 10:16:39AM +0800, Miaohe Lin wrote:
> On 2022/7/4 9:33, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
> >
> > I found a weird state of the 1GB hugepage pool, caused by the following
> > procedure:
> >
> >   - run a process reserving all free 1GB hugepages,
> >   - shrink the free 1GB hugepage pool to zero (i.e. write 0 to
> >     /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages), then
> >   - kill the reserving process.
> >
> > Then all the hugepages are free *and* surplus at the same time.
> >
> >   $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> >   3
> >   $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages
> >   3
> >   $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/resv_hugepages
> >   0
> >   $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/surplus_hugepages
> >   3
> >
> > This state is resolved by reserving and allocating the pages and then
> > freeing them again, so it does not seem to cause a serious problem.
> > But it is a little surprising (shrinking the pool suddenly fails).
> >
> > This behavior is caused by the hstate_is_gigantic() check in
> > return_unused_surplus_pages(). The check was introduced back in 2008
> > by commit aa888a74977a ("hugetlb: support larger than MAX_ORDER"), when
> > gigantic pages were not supposed to be allocated/freed at run time.
> > Now the kernel supports runtime allocation/freeing, so let's also check
> > gigantic_page_runtime_supported().
> >
> > Signed-off-by: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
>
> This patch looks good to me, with a few questions below.

Thank you for reviewing.

>
> > ---
> > v2 -> v3:
> > - Fixed typo in patch description,
> > - add !gigantic_page_runtime_supported() check instead of removing
> >   hstate_is_gigantic() check (suggested by Miaohe and Muchun)
> > - add a few more !gigantic_page_runtime_supported() checks in
> >   set_max_huge_pages() (by Mike).
> > ---
> >  mm/hugetlb.c | 19 ++++++++++++++++---
> >  1 file changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 2a554f006255..bdc4499f324b 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2432,8 +2432,7 @@ static void return_unused_surplus_pages(struct hstate *h,
> >  	/* Uncommit the reservation */
> >  	h->resv_huge_pages -= unused_resv_pages;
> >
> > -	/* Cannot return gigantic pages currently */
> > -	if (hstate_is_gigantic(h))
> > +	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> >  		goto out;
> >
> >  	/*
> > @@ -3315,7 +3314,8 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
> >  	 * the user tries to allocate gigantic pages but let the user free the
> >  	 * boottime allocated gigantic pages.
> >  	 */
> > -	if (hstate_is_gigantic(h) && !IS_ENABLED(CONFIG_CONTIG_ALLOC)) {
> > +	if (hstate_is_gigantic(h) && (!IS_ENABLED(CONFIG_CONTIG_ALLOC) ||
> > +				      !gigantic_page_runtime_supported())) {
> >  		if (count > persistent_huge_pages(h)) {
> >  			spin_unlock_irq(&hugetlb_lock);
> >  			mutex_unlock(&h->resize_lock);
> > @@ -3363,6 +3363,19 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
> >  		goto out;
> >  	}
> >
> > +	/*
> > +	 * We can not decrease gigantic pool size if runtime modification
> > +	 * is not supported.
> > +	 */
> > +	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) {
> > +		if (count < persistent_huge_pages(h)) {
> > +			spin_unlock_irq(&hugetlb_lock);
> > +			mutex_unlock(&h->resize_lock);
> > +			NODEMASK_FREE(node_alloc_noretry);
> > +			return -EINVAL;
> > +		}
> > +	}
>
> With the above change, we're not allowed to decrease the pool size now. But it
> was allowed previously even if !gigantic_page_runtime_supported. Will this
> break users?

Yes, it will.  I might have the wrong idea about the definition of
gigantic_page_runtime_supported(): it may only indicate whether runtime pool
*extension* is supported (implying that pool shrinking is always possible).
If that is right, this new if-block is not necessary.

>
> And it seems max_huge_pages is not allowed to be adjusted now if
> !gigantic_page_runtime_supported for gigantic huge pages. Should we just
> return in such a case, as there should be nothing to do now? Or am I missing
> something?

If pool shrinking is always allowed, we need to update max_huge_pages, so the
above if-block should have a "goto out;" instead, but it will be removed
anyway so we don't have to care about it.

Thank you for the valuable comment.

- Naoya Horiguchi
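[Editor's note: the following reproducer sketch is not part of the thread.]

For reference, a minimal sketch of the "reserving process" step from the
procedure above. The file name and command-line handling are assumptions,
and it presumes 1GB hugepages were already allocated (e.g. at boot) and that
the kernel supports MAP_HUGE_1GB. A MAP_PRIVATE hugetlb mapping takes its
reservation at mmap() time without faulting pages in, so while this process
sleeps you can write 0 to .../hugepages-1048576kB/nr_hugepages and then kill
it to reach the free == surplus state shown at the top of the thread.

  /* reserve_1gb.c: hypothetical reproducer helper (not from the patch) */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/mman.h>

  #ifndef MAP_HUGE_SHIFT
  #define MAP_HUGE_SHIFT	26
  #endif
  #ifndef MAP_HUGE_1GB
  #define MAP_HUGE_1GB	(30 << MAP_HUGE_SHIFT)	/* log2(1GB) == 30 */
  #endif

  int main(int argc, char **argv)
  {
  	/* number of 1GB hugepages to reserve (default: 1) */
  	size_t nr = argc > 1 ? strtoul(argv[1], NULL, 0) : 1;
  	void *p;

  	/* private hugetlb mappings reserve their pages at mmap() time */
  	p = mmap(NULL, nr << 30, PROT_READ | PROT_WRITE,
  		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
  		 -1, 0);
  	if (p == MAP_FAILED) {
  		perror("mmap");
  		return 1;
  	}
  	printf("reserved %zu 1GB hugepage(s), pid %d\n", nr, (int)getpid());
  	pause();	/* hold the reservation until killed */
  	return 0;
  }

Run it with the number of pages reported in free_hugepages, shrink the pool
via nr_hugepages, then kill the process and re-read the counters.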