No matter what context update_and_free_page() is called in, the flag used to
allocate the vmemmap pages is fixed (GFP_KERNEL | __GFP_NORETRY |
__GFP_THISNODE) and no atomic allocation is involved, so the description of
atomicity here is somewhat inappropriate. Likewise, the "atomic" parameter
name of update_and_free_page() is somewhat misleading.

Signed-off-by: luofei <luofei@xxxxxxxxxxxx>
---
 mm/hugetlb.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f8ca7cca3c1a..239ef82b7897 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1570,8 +1570,8 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 
 /*
  * As update_and_free_page() can be called under any context, so we cannot
- * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
- * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
+ * use GFP_ATOMIC to allocate vmemmap pages. However, we can defer the
+ * actual freeing in a workqueue to prevent waits caused by allocating
  * the vmemmap pages.
  *
  * free_hpage_workfn() locklessly retrieves the linked list of pages to be
@@ -1617,16 +1617,14 @@ static inline void flush_free_hpage_work(struct hstate *h)
 }
 
 static void update_and_free_page(struct hstate *h, struct page *page,
-					bool atomic)
+					bool delay)
 {
-	if (!HPageVmemmapOptimized(page) || !atomic) {
+	if (!HPageVmemmapOptimized(page) || !delay) {
 		__update_and_free_page(h, page);
 		return;
 	}
 
 	/*
-	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap pages.
-	 *
 	 * Only call schedule_work() if hpage_freelist is previously
 	 * empty. Otherwise, schedule_work() had been called but the workfn
 	 * hasn't retrieved the list yet.
-- 
2.27.0