From: luofei <luofei@xxxxxxxxxxxx>

When fd0e786d9d09 ("x86/mm, mm/hwpoison: Don't unconditionally unmap
kernel 1:1 pages") was backported to 4.14.y, the logic was reversed
when calling memory_failure() to determine whether the kernel page
needs to be unmapped: only when memory_failure() returns successfully
can the kernel page be unmapped.

Signed-off-by: luofei <luofei@xxxxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx #v4.14.x
Cc: stable@xxxxxxxxxxxxxxx #v4.15.x
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/kernel/cpu/mcheck/mce.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -589,7 +589,7 @@ static int srao_decode_notifier(struct n
 	if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
 		pfn = mce->addr >> PAGE_SHIFT;
-		if (memory_failure(pfn, MCE_VECTOR, 0))
+		if (!memory_failure(pfn, MCE_VECTOR, 0))
 			mce_unmap_kpfn(pfn);
 	}
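
For illustration only (not part of the patch to apply): a minimal
userspace sketch of the return-value convention the fix relies on.
memory_failure() and mce_unmap_kpfn() below are hypothetical stubs
standing in for the kernel functions, assuming the kernel convention
that memory_failure() returns 0 on success; the point is that the
kernel 1:1 page may be unmapped only on that success path.

#include <stdio.h>

/* Stub modeling the assumed kernel convention: 0 on success,
 * nonzero on failure.
 */
static int memory_failure(unsigned long pfn, int vector, int flags)
{
	/* Pretend the poisoned page was handled successfully. */
	return 0;
}

static void mce_unmap_kpfn(unsigned long pfn)
{
	printf("unmapping kernel 1:1 mapping for pfn %lu\n", pfn);
}

int main(void)
{
	unsigned long pfn = 42;

	/* The 4.14.y backport dropped the '!', so the page was unmapped
	 * only when memory_failure() *failed*. The corrected check
	 * unmaps only on success, matching upstream fd0e786d9d09.
	 */
	if (!memory_failure(pfn, 18 /* MCE_VECTOR */, 0))
		mce_unmap_kpfn(pfn);

	return 0;
}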