Use the new mm_fault_accounting() helper for page fault accounting.

Avoid doing page fault accounting multiple times if the page fault is
retried.  The perf events for page faults will also be accounted when
CONFIG_PERF_EVENTS is defined.

CC: Richard Henderson <rth@xxxxxxxxxxx>
CC: Ivan Kokshaysky <ink@xxxxxxxxxxxxxxxxxxxx>
CC: Matt Turner <mattst88@xxxxxxxxx>
CC: linux-alpha@xxxxxxxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
---
 arch/alpha/mm/fault.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index c2d7b6d7bac7..4f8632ddef25 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -88,7 +88,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	struct mm_struct *mm = current->mm;
 	const struct exception_table_entry *fixup;
 	int si_code = SEGV_MAPERR;
-	vm_fault_t fault;
+	vm_fault_t fault, major = 0;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
 
 	/* As of EV6, a load into $31/$f31 is a prefetch, and never faults
@@ -149,6 +149,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	   make sure we exit gracefully rather than endlessly redo
 	   the fault.  */
 	fault = handle_mm_fault(vma, address, flags);
+	major |= fault & VM_FAULT_MAJOR;
 
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -164,10 +165,6 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	}
 
 	if (flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR)
-			current->maj_flt++;
-		else
-			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
 
@@ -182,6 +179,8 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 
 	up_read(&mm->mmap_sem);
 
+	mm_fault_accounting(current, regs, address, major);
+
 	return;
 
 	/* Something tried to access memory that isn't in our memory map.
-- 
2.26.2
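
Note: the mm_fault_accounting() helper itself is introduced elsewhere in
this series and is not part of this patch.  Going only by the call site
above, a minimal sketch of what a helper with this signature could look
like is below; the body, the choice of perf software events and the
parameter types are assumptions for illustration, not the series' actual
implementation:

#include <linux/mm_types.h>
#include <linux/perf_event.h>
#include <linux/sched.h>

/*
 * Illustrative sketch only.  The idea is to fold the per-task fault
 * counters and the perf software events into one place, so an arch
 * fault handler calls this once after the fault is fully resolved
 * instead of bumping maj_flt/min_flt on every retry.
 */
void mm_fault_accounting(struct task_struct *task, struct pt_regs *regs,
			 unsigned long address, vm_fault_t major)
{
	/* One PERF_COUNT_SW_PAGE_FAULTS event per fault, counted once. */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);

	if (major) {
		task->maj_flt++;
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
	} else {
		task->min_flt++;
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
	}
}

With something like this, the alpha handler only needs to remember
whether any attempt was a major fault (major |= fault & VM_FAULT_MAJOR)
and make a single call after up_read(), which is what keeps the
accounting from happening once per retry.  perf_sw_event() compiles to a
no-op without CONFIG_PERF_EVENTS, matching the note in the commit
message.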