The quilt patch titled
     Subject: mm-memory-failure-fix-deadlock-when-hugetlb_optimize_vmemmap-is-enabled-v2-fix
has been removed from the -mm tree.  Its filename was
     mm-memory-failure-fix-deadlock-when-hugetlb_optimize_vmemmap-is-enabled-v2-fix.patch

This patch was dropped because it was folded into mm-memory-failure-fix-deadlock-when-hugetlb_optimize_vmemmap-is-enabled.patch

------------------------------------------------------
From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Subject: mm-memory-failure-fix-deadlock-when-hugetlb_optimize_vmemmap-is-enabled-v2-fix
Date: Fri Apr 12 04:18:11 PM PDT 2024

reflow block comment

Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Cc: Naoya Horiguchi <nao.horiguchi@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory-failure.c |   18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

--- a/mm/memory-failure.c~mm-memory-failure-fix-deadlock-when-hugetlb_optimize_vmemmap-is-enabled-v2-fix
+++ a/mm/memory-failure.c
@@ -155,14 +155,16 @@ static int __page_handle_poison(struct p
 	int ret;
 
 	/*
-	 * zone_pcp_disable() can't be used here. It will hold pcp_batch_high_lock and
-	 * dissolve_free_huge_page() might hold cpu_hotplug_lock via static_key_slow_dec()
-	 * when hugetlb vmemmap optimization is enabled. This will break current lock
-	 * dependency chain and leads to deadlock.
-	 * Disabling pcp before dissolving the page was a deterministic approach because
-	 * we made sure that those pages cannot end up in any PCP list. Draining PCP lists
-	 * expels those pages to the buddy system, but nothing guarantees that those pages
-	 * do not get back to a PCP queue if we need to refill those.
+	 * zone_pcp_disable() can't be used here. It will
+	 * hold pcp_batch_high_lock and dissolve_free_huge_page() might hold
+	 * cpu_hotplug_lock via static_key_slow_dec() when hugetlb vmemmap
+	 * optimization is enabled. This will break current lock dependency
+	 * chain and leads to deadlock.
+	 * Disabling pcp before dissolving the page was a deterministic
+	 * approach because we made sure that those pages cannot end up in any
+	 * PCP list. Draining PCP lists expels those pages to the buddy system,
+	 * but nothing guarantees that those pages do not get back to a PCP
+	 * queue if we need to refill those.
 	 */
 	ret = dissolve_free_huge_page(page);
 	if (!ret) {
_

Patches currently in -mm which might be from akpm@xxxxxxxxxxxxxxxxxxxx are

mm-memory-failure-fix-deadlock-when-hugetlb_optimize_vmemmap-is-enabled.patch
bootconfig-use-memblock_free_late-to-free-xbc-memory-to-buddy-fix.patch
selftests-harness-remove-use-of-line_max-fix.patch
selftests-harness-remove-use-of-line_max-fix-fix.patch
mm-sparc-change-pxd_huge-behavior-to-exclude-swap-entries-fix.patch
mm-hold-ptl-from-the-first-pte-while-reclaiming-a-large-folio-fix.patch
sh-remove-use-of-pg_arch_1-on-individual-pages-fix.patch
mm-gup-drop-folio_fast_pin_allowed-in-hugepd-processing-fix.patch
mm-allow-anon-exclusive-check-over-hugetlb-tail-pages-fix.patch
arm-mm-drop-vm_fault_badmap-vm_fault_badaccess-checkpatch-fixes.patch
mm-hugetlb-rename-dissolve_free_huge_pages-to-dissolve_free_hugetlb_folios-fix.patch
__mod_memcg_lruvec_state-enhance-diagnostics.patch
__mod_memcg_lruvec_state-enhance-diagnostics-fix.patch