Re: [PATCH 1/2] arm64: mm: drop VM_FAULT_BADMAP/VM_FAULT_BADACCESS

On Wed, Apr 10, 2024 at 06:58:27PM +0800, Kefeng Wang wrote:
> On 2024/4/10 9:30, Kefeng Wang wrote:
> > On 2024/4/9 22:28, Catalin Marinas wrote:
> > > On Sun, Apr 07, 2024 at 04:12:10PM +0800, Kefeng Wang wrote:
> > > > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > > > index 405f9aa831bd..61a2acae0dca 100644
> > > > --- a/arch/arm64/mm/fault.c
> > > > +++ b/arch/arm64/mm/fault.c
> > > > @@ -500,9 +500,6 @@ static bool is_write_abort(unsigned long esr)
> > > >       return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
> > > >   }
> > > > -#define VM_FAULT_BADMAP        ((__force vm_fault_t)0x010000)
> > > > -#define VM_FAULT_BADACCESS    ((__force vm_fault_t)0x020000)
> > > > -
> > > >   static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
> > > >                      struct pt_regs *regs)
> > > >   {
> > > > @@ -513,6 +510,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
> > > >       unsigned int mm_flags = FAULT_FLAG_DEFAULT;
> > > >       unsigned long addr = untagged_addr(far);
> > > >       struct vm_area_struct *vma;
> > > > +    int si_code;
> > > 
> > > I think we should initialise this to 0. Currently all paths seem to set
> > > si_code to something meaningful but I'm not sure the last 'else' clause
> > > in this patch is guaranteed to always cover exactly those earlier code
> > > paths updating si_code. I'm not talking about the 'goto bad_area' paths,
> > > since they set 'fault' to 0, but about the fall-through after the second
> > > (under the mm lock) handle_mm_fault().
[...]
> > > > +    fault = handle_mm_fault(vma, addr, mm_flags, regs);
> > > >       /* Quick path to respond to signals */
> > > >       if (fault_signal_pending(fault, regs)) {
> > > >           if (!user_mode(regs))
> > > > @@ -626,13 +628,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
> > > >       mmap_read_unlock(mm);
> > > >   done:
> > > > -    /*
> > > > -     * Handle the "normal" (no error) case first.
> > > > -     */
> > > > -    if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
> > > > -                  VM_FAULT_BADACCESS))))
> > > > +    /* Handle the "normal" (no error) case first. */
> > > > +    if (likely(!(fault & VM_FAULT_ERROR)))
> > > >           return 0;
> 
> Another choice: we set si_code = SEGV_MAPERR here, since the normal
> page fault path doesn't use si_code; only the error path needs to
> initialise it.

Yes, I think initialising it here would be fine. That's the fall-through
case I was concerned about. All the other goto bad_area places already
initialise si_code.
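
For illustration, a minimal sketch of what that fall-through default could
look like around the done:/bad_area labels in do_page_fault() (the exact
placement and the surrounding labels here are assumptions based on the
quoted diff, not necessarily the final patch):

    done:
    	/* Handle the "normal" (no error) case first. */
    	if (likely(!(fault & VM_FAULT_ERROR)))
    		return 0;

    	/*
    	 * Only the error path below consumes si_code, so default it here
    	 * to cover the handle_mm_fault() error fall-through.  The earlier
    	 * "goto bad_area" paths jump past this assignment and keep the
    	 * si_code they already set.
    	 */
    	si_code = SEGV_MAPERR;
    bad_area:
    	if (!user_mode(regs))
    		goto no_context;
    	/* ... report the fault to userspace using si_code ... */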

-- 
Catalin
