On 2025/2/26 23:57, David Hildenbrand wrote:
> On 25.02.25 15:19, Jinjiang Tu wrote:
>> In set_max_huge_pages(), min_count should mean the acquired persistent
>> huge pages, but it contains surplus huge pages. It will leads to failing
> s/leads/lead/
>> to freeing free huge pages for a Node.
>>
>> Steps to reproduce:
>> 1) create 5 huge pages in Node 0
>> 2) run a program to use all the huge pages
>> 3) create 5 huge pages in Node 1
>> 4) echo 0 > nr_hugepages for Node 1 to free the huge pages
>>
>> The result:
>>              Node 0    Node 1
>> Total             5         5
>> Free              0         5
>> Surp              5         5
>
> Can you also share the results after your change?

With this patch, step 4) destroys the 5 huge pages in Node 1.

The result with this patch:
             Node 0    Node 1
Total             5         0
Free              0         0
Surp              5         0
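
FWIW, the "program to use all the huge pages" in step 2) can be as simple
as the sketch below. This is only an illustrative sketch, not the exact
reproducer: it assumes 2MB default huge pages, and binding the allocation
to node 0 (e.g. via numactl --membind=0) is left out.

/*
 * Illustrative sketch: map and touch NR_HPAGES default-sized huge pages,
 * then block so the pages stay in use while nr_hugepages is changed from
 * a shell.  Assumes 2MB default huge pages.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)
#define NR_HPAGES	5

int main(void)
{
	size_t len = NR_HPAGES * HPAGE_SIZE;
	size_t i;
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch every huge page so all of them are actually faulted in. */
	for (i = 0; i < len; i += HPAGE_SIZE)
		p[i] = 1;

	pause();	/* keep the huge pages in use */
	return 0;
}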
>>
>> Fixes: 9a30523066cd ("hugetlb: add per node hstate attributes")
>> Signed-off-by: Jinjiang Tu <tujinjiang@xxxxxxxxxx>
>> ---
>>  mm/hugetlb.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 163190e89ea1..783faec7360b 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3758,7 +3758,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>>  	 * and won't grow the pool anywhere else. Not until one of the
>>  	 * sysctls are changed, or the surplus pages go out of use.
>>  	 */
>> -	min_count = h->resv_huge_pages + h->nr_huge_pages - h->free_huge_pages;
>> +	min_count = h->resv_huge_pages + persistent_huge_pages(h) - h->free_huge_pages;
>>  	min_count = max(count, min_count);
>>  	try_to_free_low(h, min_count, nodes_allowed);
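
To spell out why the one-liner matters: as I read mm/hugetlb.c,
persistent_huge_pages() is just nr_huge_pages minus surplus_huge_pages, so
the old formula counts surplus pages toward the pages that must be kept and
min_count ends up too high by exactly the surplus count. The toy userspace
model below (mock struct and made-up numbers, not kernel code; only the two
formulas mirror the kernel) shows the difference:

/*
 * Toy model of the min_count calculation in set_max_huge_pages().
 * The struct and the numbers are made up for illustration.
 */
#include <stdio.h>

struct hstate_model {
	unsigned long nr_huge_pages;		/* persistent + surplus */
	unsigned long surplus_huge_pages;
	unsigned long free_huge_pages;
	unsigned long resv_huge_pages;
};

static unsigned long persistent_huge_pages(const struct hstate_model *h)
{
	return h->nr_huge_pages - h->surplus_huge_pages;
}

int main(void)
{
	/* Example state with surplus pages present and some pages free. */
	struct hstate_model h = {
		.nr_huge_pages		= 10,
		.surplus_huge_pages	= 5,
		.free_huge_pages	= 5,
		.resv_huge_pages	= 0,
	};
	unsigned long old_min, new_min;

	old_min = h.resv_huge_pages + h.nr_huge_pages - h.free_huge_pages;
	new_min = h.resv_huge_pages + persistent_huge_pages(&h) -
		  h.free_huge_pages;

	/* old_min is larger than new_min by exactly surplus_huge_pages. */
	printf("old min_count=%lu, new min_count=%lu, diff=%lu\n",
	       old_min, new_min, old_min - new_min);
	return 0;
}

With surplus pages in the pool, the inflated old min_count makes the shrink
path keep that many extra pages, which is why the free huge pages on Node 1
were not released before this patch.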