Hi, Mike,
On 10/18/2023 7:31 PM, Mike Kravetz wrote:
From: Joao Martins <joao.m.martins@xxxxxxxxxx>
Now that a list of pages is deduplicated at once, the TLB
flush can be batched for all vmemmap pages that got remapped.
[..]
@@ -719,19 +737,28 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
list_for_each_entry(folio, folio_list, lru) {
int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
- &vmemmap_pages);
+ &vmemmap_pages,
+ VMEMMAP_REMAP_NO_TLB_FLUSH);
/*
* Pages to be freed may have been accumulated. If we
* encounter an ENOMEM, free what we have and try again.
+ * This can occur in the case that both splitting fails
+ * halfway and head page allocation also fails. In this
+ * case __hugetlb_vmemmap_optimize() would free memory,
+ * allowing more vmemmap remaps to occur.
*/
if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
+ flush_tlb_all();
free_vmemmap_page_list(&vmemmap_pages);
INIT_LIST_HEAD(&vmemmap_pages);
- __hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);
+ __hugetlb_vmemmap_optimize(h, &folio->page,
+ &vmemmap_pages,
+ VMEMMAP_REMAP_NO_TLB_FLUSH);
}
}
+ flush_tlb_all();
It seems that if folio_list is empty, we would still pay for a
flush_tlb_all() here. Perhaps it's worth checking for an empty list up
front and returning early?
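Something along these lines, perhaps (untested sketch; the function
shape is my reading of the hunk above, so the surrounding declarations
are assumptions):

	void hugetlb_vmemmap_optimize_folios(struct hstate *h,
					     struct list_head *folio_list)
	{
		struct folio *folio;
		LIST_HEAD(vmemmap_pages);

		/* No folios to remap means no stale TLB entries to flush. */
		if (list_empty(folio_list))
			return;

		list_for_each_entry(folio, folio_list, lru) {
			/* ... existing optimize loop from the hunk above ... */
		}

		flush_tlb_all();
		free_vmemmap_page_list(&vmemmap_pages);
	}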
thanks,
-jane
free_vmemmap_page_list(&vmemmap_pages);
}