On Wed, 11 Dec 2019, Mike Kravetz wrote:
> The workqueue approach would address both soft and hard irq context
> issues, so I too think it is the approach we should explore. The fact
> that more than one lock is involved is another reason for a workqueue
> approach. I'll take a look at an initial workqueue implementation.
> However, I have not dealt with workqueues in some time, so it may take
> a few days to evaluate.
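For context, the alternative to deferring would be to make every
hugetlb_lock acquisition hard-irq safe, along the lines of the sketch
below; the flags save/restore on every call is precisely the overhead
the patch avoids. This is illustrative only and not part of the patch
(the function name is made up, and the real free path body is elided):

/*
 * Sketch of the rejected alternative: every hugetlb_lock acquisition
 * pays the flags save/restore (e.g. PUSHF+POPF on x86) just to
 * tolerate the unlikely irq-context free.
 */
static void free_huge_page_irqsafe(struct page *page)
{
	unsigned long flags;

	spin_lock_irqsave(&hugetlb_lock, flags);
	/* ... update hstate free lists and counters ... */
	spin_unlock_irqrestore(&hugetlb_lock, flags);
}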
I'm thinking of something like the following; it at least passes all
the hugetlb-related LTP test cases.

Thanks,
Davidlohr

----8<------------------------------------------------------------------
[PATCH] mm/hugetlb: defer free_huge_page() to a workqueue

There have been deadlock reports [1, 2] where put_page is called from
softirq context and this causes trouble with the hugetlb_lock, as well
as potentially the subpool lock.

For such an unlikely scenario, let's not add irq-dancing overhead to
the lock+unlock operations, which could incur expensive instruction
dependencies, particularly when considering hard-irq safety; for
example, PUSHF+POPF on x86.

Instead, just use a workqueue and do the free_huge_page() in regular
task context.

[1] https://lore.kernel.org/lkml/20191211194615.18502-1-longman@xxxxxxxxxx/
[2] https://lore.kernel.org/lkml/20180905112341.21355-1-aneesh.kumar@xxxxxxxxxxxxx/

Signed-off-by: Davidlohr Bueso <dbueso@xxxxxxx>
---
 mm/hugetlb.c | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ac65bb5e38ac..737108d8d637 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1136,8 +1136,17 @@ static inline void ClearPageHugeTemporary(struct page *page)
 	page[2].mapping = NULL;
 }
 
-void free_huge_page(struct page *page)
+static struct workqueue_struct *hugetlb_free_page_wq;
+struct hugetlb_free_page_work {
+	struct page *page;
+	struct work_struct work;
+};
+
+static void free_huge_page_workfn(struct work_struct *work)
 {
+	struct page *page = container_of(work,
+					 struct hugetlb_free_page_work,
+					 work)->page;
 	/*
 	 * Can't pass hstate in here because it is called from the
 	 * compound page destructor.
@@ -1197,6 +1206,27 @@ void free_huge_page(struct page *page)
 		enqueue_huge_page(h, page);
 	}
 	spin_unlock(&hugetlb_lock);
+
+}
+
+/*
+ * While unlikely, free_huge_page() can be at least called from
+ * softirq context, defer freeing such that the hugetlb_lock and
+ * spool->lock need not have to deal with irq dances just for this.
+ */
+void free_huge_page(struct page *page)
+{
+	struct hugetlb_free_page_work work;
+
+	work.page = page;
+	INIT_WORK_ONSTACK(&work.work, free_huge_page_workfn);
+	queue_work(hugetlb_free_page_wq, &work.work);
+
+	/*
+	 * Wait until free_huge_page is done.
+	 */
+	flush_work(&work.work);
+	destroy_work_on_stack(&work.work);
 }
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
@@ -2816,6 +2846,12 @@ static int __init hugetlb_init(void)
 
 	for (i = 0; i < num_fault_mutexes; i++)
 		mutex_init(&hugetlb_fault_mutex_table[i]);
+
+	hugetlb_free_page_wq = alloc_workqueue("hugetlb_free_page_wq",
+					WQ_MEM_RECLAIM, 0);
+	if (!hugetlb_free_page_wq)
+		return -ENOMEM;
+
 	return 0;
 }
 subsys_initcall(hugetlb_init);
--
2.16.4
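In case it helps review, here is the on-stack work item pattern the
patch relies on, reduced to a self-contained sketch; the struct and
function names and the integer payload are made up for illustration:

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct onstack_work {
	int payload;			/* stands in for the struct page * */
	struct work_struct work;
};

static void onstack_workfn(struct work_struct *work)
{
	struct onstack_work *ow = container_of(work, struct onstack_work,
					       work);

	/* Runs in regular task context; non-irq-safe locks are fine here. */
	pr_info("handling payload %d\n", ow->payload);
}

static void run_onstack_work(struct workqueue_struct *wq, int payload)
{
	struct onstack_work ow;

	ow.payload = payload;
	INIT_WORK_ONSTACK(&ow.work, onstack_workfn);
	queue_work(wq, &ow.work);

	/*
	 * The stack frame must outlive the work item, so wait for it;
	 * note that flush_work() may sleep.
	 */
	flush_work(&ow.work);
	destroy_work_on_stack(&ow.work);
}

destroy_work_on_stack() pairs with INIT_WORK_ONSTACK() so that work-item
object debugging (CONFIG_DEBUG_OBJECTS_WORK) sees a matching
init/destroy for the stack object.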