On 2025/2/18 19:45, David Hildenbrand wrote:
On 18.02.25 12:40, yangge1116@xxxxxxx wrote:
From: Ge Yang <yangge1116@xxxxxxx>
Since the introduction of commit c77c0a8ac4c52 ("mm/hugetlb: defer
freeing of huge pages if in non-task context"), which supports deferring
the freeing of hugetlb pages, the allocation of contiguous memory through
cma_alloc() may fail probabilistically.
In the CMA allocation process, if it is found that the CMA area is
occupied by in-use hugetlb folios, these in-use hugetlb folios need to
be migrated to another location. When there are no available hugetlb
folios in the free hugetlb pool during the migration of in-use hugetlb
folios, new folios are allocated from the buddy system. A temporary
state is set on the newly allocated folios. Upon completion of the
hugetlb folio migration, the temporary state is transferred from the
new folios to the old folios. Normally, when the old folios with the
temporary state are freed, they are released directly back to the buddy
system. However, due to the deferred freeing of hugetlb pages, the
PageBuddy() check fails, ultimately leading to the failure of
cma_alloc().
Here is a simplified call trace illustrating the process:
cma_alloc()
    ->__alloc_contig_migrate_range() // Migrate in-use hugetlb folios
        ->unmap_and_move_huge_page()
            ->folio_putback_hugetlb() // Free old folios
    ->test_pages_isolated()
        ->__test_page_isolated_in_pageblock()
            ->PageBuddy(page) // Check if the page is in buddy
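
For reference, the deferral that keeps the old folios off the buddy at
this point comes from c77c0a8ac4c52; a paraphrased sketch of that
mechanism is below (the condition and surrounding code have been
reshuffled in later kernels, so treat it as illustrative only):

void free_huge_page(struct page *page)
{
	if (!in_task()) {
		/*
		 * Non-task context: queue the page on hpage_freelist and
		 * let the free_hpage_work workqueue item free it later,
		 * so the page is not PageBuddy() yet when the put returns.
		 */
		if (llist_add((struct llist_node *)&page->mapping,
			      &hpage_freelist))
			schedule_work(&free_hpage_work);
		return;
	}

	__free_huge_page(page);	/* task context: freed synchronously */
}
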
To resolve this issue, we have implemented a function named
wait_for_freed_hugetlb_folios(). This function ensures that the hugetlb
folios are properly released back to the buddy system after their
migration is completed. By invoking wait_for_freed_hugetlb_folios()
before calling PageBuddy(), we ensure that the PageBuddy() check
succeeds.
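
For completeness, the caller side (not part of the hunk quoted below)
amounts to roughly the following; the placement in test_pages_isolated()
is shown here as an illustrative sketch rather than a quote of the
patch:

int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
			int isol_flags)
{
	...
	/*
	 * Hugetlb folios freed via the deferred path may not have been
	 * returned to the buddy yet; flush that work before
	 * __test_page_isolated_in_pageblock() looks for PageBuddy().
	 */
	wait_for_freed_hugetlb_folios();
	...
}
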
Fixes: c77c0a8ac4c52 ("mm/hugetlb: defer freeing of huge pages if in non-task context")
Signed-off-by: Ge Yang <yangge1116@xxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
+void wait_for_freed_hugetlb_folios(void)
+{
+	flush_work(&free_hpage_work);
BTW, I was wondering if we could optimize out some calls here by sensing
if there is actually work.
for_each_hstate(h) {
	if (hugetlb_vmemmap_optimizable(h)) {
		flush_work(&free_hpage_work);
		break;
	}
}
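
Spelled out as a full function, that adjustment would read roughly as
follows (sketch; h is a local struct hstate pointer):

void wait_for_freed_hugetlb_folios(void)
{
	struct hstate *h;

	/*
	 * Flush the deferred-free work only if some hstate is vmemmap
	 * optimizable; otherwise skip the flush_work() call entirely.
	 */
	for_each_hstate(h) {
		if (hugetlb_vmemmap_optimizable(h)) {
			flush_work(&free_hpage_work);
			break;
		}
	}
}
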
Is this adjustment okay?
In most cases, we'll never ever have to actually wait here, especially
on systems without any configured hugetlb pages etc ...
It's rather a corner case that we have to wait here on most systems.