In set_max_huge_pages(), min_count should mean the count of acquired
persistent huge pages, but it also includes surplus huge pages. This
leads to a failure to free the free huge pages of a node.

Steps to reproduce:
1) create 5 huge pages in Node 0
2) run a program to use all the huge pages
3) create 5 huge pages in Node 1
4) echo 0 > nr_hugepages for Node 1 to free the huge pages

The result:
        Node 0    Node 1
Total      5         5
Free       0         5
Surp       5         5

Fixes: 9a30523066cd ("hugetlb: add per node hstate attributes")
Signed-off-by: Jinjiang Tu <tujinjiang@xxxxxxxxxx>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 163190e89ea1..783faec7360b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3758,7 +3758,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	 * and won't grow the pool anywhere else. Not until one of the
 	 * sysctls are changed, or the surplus pages go out of use.
 	 */
-	min_count = h->resv_huge_pages + h->nr_huge_pages - h->free_huge_pages;
+	min_count = h->resv_huge_pages + persistent_huge_pages(h) - h->free_huge_pages;
 	min_count = max(count, min_count);
 	try_to_free_low(h, min_count, nodes_allowed);
-- 
2.43.0