On Tue, Apr 2, 2024 at 12:53 AM Kefeng Wang <wangkefeng.wang@xxxxxxxxxx> wrote:
>
> The vm_flags of the vma are already checked under the per-VMA lock; if it
> is a bad access, handle the error directly and return. There is no need to
> lock_mm_and_find_vma() and check vm_flags again.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>

Reviewed-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>

> ---
>  arch/riscv/mm/fault.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index 3ba1d4dde5dd..b3fcf7d67efb 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -292,7 +292,10 @@ void handle_page_fault(struct pt_regs *regs)
>
>  	if (unlikely(access_error(cause, vma))) {
>  		vma_end_read(vma);
> -		goto lock_mmap;
> +		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
> +		tsk->thread.bad_cause = cause;
> +		bad_area_nosemaphore(regs, SEGV_ACCERR, addr);
> +		return;
>  	}
>
>  	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
> --
> 2.27.0
>
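
For context, a rough sketch of how the per-VMA-lock fast path in
handle_page_fault() reads with this change applied. The surrounding lines are
paraphrased from mainline arch/riscv/mm/fault.c and are not compile-tested;
they are shown only to place the new early return relative to
lock_vma_under_rcu() and the lock_mmap fallback:

	/* Per-VMA-lock fast path in handle_page_fault(), sketch only. */
	if (!(flags & FAULT_FLAG_USER))
		goto lock_mmap;

	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		goto lock_mmap;

	if (unlikely(access_error(cause, vma))) {
		/*
		 * vm_flags was already checked against the fault cause under
		 * the per-VMA read lock, so this is a genuine bad access:
		 * report it here instead of falling back to mmap_lock and
		 * repeating the same lookup and check in
		 * lock_mm_and_find_vma().
		 */
		vma_end_read(vma);
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		tsk->thread.bad_cause = cause;
		bad_area_nosemaphore(regs, SEGV_ACCERR, addr);
		return;
	}

	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
	...

lock_mmap:
	/* Slow path: take mmap_lock and use lock_mm_and_find_vma(). */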