From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>

Try a speculative fault before acquiring mmap_sem; if it returns with
VM_FAULT_RETRY, continue with the mmap_sem acquisition and do the
traditional fault.

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>

[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
 handle_speculative_fault()]
[Retry with usual fault path in the case VM_ERROR is returned by
 handle_speculative_fault(). This allows signal to be delivered]
[Don't build SPF call if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Handle memory protection key fault]
Signed-off-by: Laurent Dufour <ldufour@xxxxxxxxxxxxx>
---
 arch/x86/mm/fault.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 667f1da36208..4390d207a7a1 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1401,6 +1401,18 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}
 #endif
 
+	/*
+	 * Do not try to do a speculative page fault if the fault was due to
+	 * protection keys since it can't be resolved.
+	 */
+	if (!(hw_error_code & X86_PF_PK)) {
+		fault = handle_speculative_fault(mm, address, flags);
+		if (fault != VM_FAULT_RETRY) {
+			perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
+			goto done;
+		}
+	}
+
 	/*
 	 * Kernel-mode access to the user address space should only occur
 	 * on well-defined single instructions listed in the exception
@@ -1499,6 +1511,8 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}
 
 	up_read(&mm->mmap_sem);
+
+done:
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		mm_fault_error(regs, hw_error_code, address, fault);
 		return;
-- 
2.21.0
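
Editor's note: the control flow the hunk above adds (try the fault without
any lock first, and only fall back to taking mmap_sem and running the
traditional path when the speculative handler reports a retry) can be
illustrated with a small user-space analogue. The program below is only a
sketch of that pattern: try_speculative(), handle_locked(), FAULT_OK,
FAULT_RETRY and map_lock are made-up stand-ins for
handle_speculative_fault(), the mmap_sem-protected path, VM_FAULT_RETRY and
mm->mmap_sem; it is ordinary pthread code, not kernel code.

/* Illustrative user-space analogue of the speculative-first fault flow. */
#include <pthread.h>
#include <stdio.h>

#define FAULT_OK	0
#define FAULT_RETRY	1

/* Stand-in for mm->mmap_sem. */
static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;

/*
 * Stand-in for handle_speculative_fault(): resolves the fault without the
 * lock, or bails out with FAULT_RETRY when it cannot proceed safely.
 */
static int try_speculative(unsigned long address)
{
	(void)address;
	/* Pretend the lock-free walk raced with a writer and must retry. */
	return FAULT_RETRY;
}

/* Stand-in for the traditional path run with the lock held for read. */
static int handle_locked(unsigned long address)
{
	(void)address;
	return FAULT_OK;
}

static int fault(unsigned long address)
{
	int ret;

	/* Fast path: no lock taken. */
	ret = try_speculative(address);
	if (ret != FAULT_RETRY)
		return ret;

	/* Slow path: acquire the lock and do the traditional fault. */
	pthread_rwlock_rdlock(&map_lock);
	ret = handle_locked(address);
	pthread_rwlock_unlock(&map_lock);
	return ret;
}

int main(void)
{
	printf("fault() -> %d\n", fault(0x1000));
	return 0;
}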