Only unmap the page when the memory error has been properly handled by
memory_failure(), not the other way around: memory_failure() returns 0
on success, so the kernel 1:1 page should only be unmapped when that
call succeeds.

Fixes: 26f8c38bb466 ("x86/mm, mm/hwpoison: Don't unconditionally unmap kernel 1:1 pages")
Signed-off-by: luofei <luofei@xxxxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx #v4.14
---
 arch/x86/kernel/cpu/mcheck/mce.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 95c09db1bba2..d8399a689165 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -589,7 +589,7 @@ static int srao_decode_notifier(struct notifier_block *nb, unsigned long val,
 
 	if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
 		pfn = mce->addr >> PAGE_SHIFT;
-		if (memory_failure(pfn, MCE_VECTOR, 0))
+		if (!memory_failure(pfn, MCE_VECTOR, 0))
 			mce_unmap_kpfn(pfn);
 	}
 
-- 
2.27.0
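
For context, a minimal standalone sketch of the control flow this patch
establishes. The stub functions below are illustrative stand-ins, not the
kernel helpers themselves; the sketch assumes memory_failure() follows the
usual kernel convention of returning 0 on success and a negative errno on
failure, which is why the unmap has to be gated on !memory_failure().

#include <stdio.h>

/* Hypothetical stand-in for the kernel's memory_failure(). */
static int memory_failure_stub(unsigned long pfn)
{
	printf("memory_failure: handled pfn 0x%lx\n", pfn);
	return 0;	/* 0 == error handled; negative errno == not handled */
}

/* Hypothetical stand-in for mce_unmap_kpfn(). */
static void mce_unmap_kpfn_stub(unsigned long pfn)
{
	printf("unmapping kernel 1:1 page for pfn 0x%lx\n", pfn);
}

int main(void)
{
	unsigned long pfn = 0x1234;

	/* Only tear down the 1:1 mapping once the error has been handled. */
	if (!memory_failure_stub(pfn))
		mce_unmap_kpfn_stub(pfn);

	return 0;
}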