The patch titled
     Subject: mm: hwpoison: support recovery from ksm_might_need_to_copy()
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-hwposion-support-recovery-from-ksm_might_need_to_copy.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-hwposion-support-recovery-from-ksm_might_need_to_copy.patch

This patch will later appear in the mm-hotfixes-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated
there every 2-3 working days

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm: hwpoison: support recovery from ksm_might_need_to_copy()
Date: Fri, 9 Dec 2022 15:28:01 +0800

When the kernel copies a page in ksm_might_need_to_copy() but runs into
an uncorrectable memory error, it will crash, because the poisoned page
is consumed by the kernel.  This is similar to copy-on-write poison
recovery: when an error is detected during the page copy, return
VM_FAULT_HWPOISON to the caller, which avoids crashing the whole system.

Note that memory failure handling on a KSM page is still skipped, but
memory_failure_queue() is called anyway, to stay consistent with the
general memory failure process.
Link: https://lkml.kernel.org/r/20221209072801.193221-1-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
Cc: Tony Luck <tony.luck@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/ksm.c      |    8 ++++++--
 mm/memory.c   |    3 +++
 mm/swapfile.c |    2 +-
 3 files changed, 10 insertions(+), 3 deletions(-)

--- a/mm/ksm.c~mm-hwposion-support-recovery-from-ksm_might_need_to_copy
+++ a/mm/ksm.c
@@ -2602,8 +2602,12 @@ struct page *ksm_might_need_to_copy(stru
 		new_page = NULL;
 	}
 	if (new_page) {
-		copy_user_highpage(new_page, page, address, vma);
-
+		if (copy_mc_user_highpage(new_page, page, address, vma)) {
+			put_page(new_page);
+			new_page = ERR_PTR(-EHWPOISON);
+			memory_failure_queue(page_to_pfn(page), 0);
+			return new_page;
+		}
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
 		__SetPageLocked(new_page);
--- a/mm/memory.c~mm-hwposion-support-recovery-from-ksm_might_need_to_copy
+++ a/mm/memory.c
@@ -3878,6 +3878,9 @@ vm_fault_t do_swap_page(struct vm_fault
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
 			goto out_page;
+		} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
+			ret = VM_FAULT_HWPOISON;
+			goto out_page;
 		}
 		folio = page_folio(page);
--- a/mm/swapfile.c~mm-hwposion-support-recovery-from-ksm_might_need_to_copy
+++ a/mm/swapfile.c
@@ -1768,7 +1768,7 @@ static int unuse_pte(struct vm_area_stru
 	swapcache = page;
 	page = ksm_might_need_to_copy(page, vma, addr);
-	if (unlikely(!page))
+	if (IS_ERR_OR_NULL(page))
 		return -ENOMEM;

 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
_

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-hwposion-support-recovery-from-ksm_might_need_to_copy.patch
mm-add-cond_resched-in-swapin_walk_pmd_entry.patch