This effort is the result of a recent bug report [1].  In subsequent
discussions [2], it was deemed necessary to properly fix the hugetlb
put_page path (free_huge_page).  This series provides a possible way to
address the issue.  Comments are welcome/encouraged, as several attempts
at this have been made in the past.  This series is based on
v5.12-rc3-mmotm-2021-03-17-22-24.

At a high level, the series provides:
- Patches 1 & 2 from Roman Gushchin provide cma_release_nowait().
- Patches 4, 5 & 6 are aimed at reducing lock hold times.  To be clear,
  the goal is to eliminate single lock hold times of long duration.
  Overall lock hold time is not addressed.
- Patch 7 makes hugetlb_lock and subpool lock IRQ safe.  It also reverts
  the code which defers calls to a workqueue if !in_task.  (A rough
  sketch of the resulting locking pattern is included at the end of
  this mail.)
- Patch 8 adds some lockdep_assert_held() calls.

[1] https://lore.kernel.org/linux-mm/000000000000f1c03b05bc43aadc@xxxxxxxxxx/
[2] http://lkml.kernel.org/r/20210311021321.127500-1-mike.kravetz@xxxxxxxxxx

RFC -> v1
- Add Roman's cma_release_nowait() patches.  This eliminated the need
  to do a workqueue handoff in hugetlb code.
- Use Michal's suggestion to batch pages for freeing.  This eliminated
  the need to recalculate loop control variables when dropping the lock.
- Added lockdep_assert_held() calls.
- Rebased to v5.12-rc3-mmotm-2021-03-17-22-24.

Mike Kravetz (6):
  hugetlb: add per-hstate mutex to synchronize user adjustments
  hugetlb: create remove_hugetlb_page() to separate functionality
  hugetlb: call update_and_free_page without hugetlb_lock
  hugetlb: change free_pool_huge_page to remove_pool_huge_page
  hugetlb: make free_huge_page irq safe
  hugetlb: add lockdep_assert_held() calls for hugetlb_lock

Roman Gushchin (2):
  mm: cma: introduce cma_release_nowait()
  mm: hugetlb: don't drop hugetlb_lock around cma_release() call

 include/linux/cma.h     |   2 +
 include/linux/hugetlb.h |   1 +
 mm/cma.c                |  93 +++++++++++
 mm/cma.h                |   5 +
 mm/hugetlb.c            | 354 +++++++++++++++++++++-------------------
 mm/hugetlb_cgroup.c     |   8 +-
 6 files changed, 294 insertions(+), 169 deletions(-)

--
2.30.2
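
For anyone skimming the series, below is a minimal, illustrative sketch
of the general shape the patches aim for: pages are unlinked from the
pool in a batch while holding an IRQ-safe spinlock, the locking
requirement is annotated with lockdep, and the potentially long-running
freeing work happens only after the lock is dropped.  This is NOT code
from the series; every name prefixed with "demo_" is hypothetical and
merely stands in for the real hugetlb structures and helpers.

/*
 * Illustrative sketch only -- not code from this series.
 */
#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/mm.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);	/* stands in for hugetlb_lock */
static LIST_HEAD(demo_free_list);	/* stands in for a per-hstate free list */

/* Unlink one page from the pool; caller must hold demo_lock. */
static void demo_remove_page(struct page *page, struct list_head *batch)
{
	lockdep_assert_held(&demo_lock);
	list_move(&page->lru, batch);
}

/* Placeholder for the expensive freeing path, run without demo_lock held. */
static void demo_free_page(struct page *page)
{
}

static void demo_shrink_pool(unsigned long count)
{
	LIST_HEAD(batch);
	struct page *page, *next;
	unsigned long flags;

	/*
	 * IRQ-safe locking: safe even if the pool is also touched from
	 * a put_page() that runs in softirq context.
	 */
	spin_lock_irqsave(&demo_lock, flags);
	while (count-- && !list_empty(&demo_free_list)) {
		page = list_first_entry(&demo_free_list, struct page, lru);
		demo_remove_page(page, &batch);
	}
	spin_unlock_irqrestore(&demo_lock, flags);

	/* Free the batch with the lock dropped, keeping single hold times short. */
	list_for_each_entry_safe(page, next, &batch, lru) {
		list_del(&page->lru);
		demo_free_page(page);
	}
}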