Re: [PATCH 1/2] arm64: mm: drop VM_FAULT_BADMAP/VM_FAULT_BADACCESS

On 2024/4/9 22:28, Catalin Marinas wrote:
Hi Kefeng,

On Sun, Apr 07, 2024 at 04:12:10PM +0800, Kefeng Wang wrote:
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 405f9aa831bd..61a2acae0dca 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -500,9 +500,6 @@ static bool is_write_abort(unsigned long esr)
  	return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
  }
-#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
-#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
-
  static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
  				   struct pt_regs *regs)
  {
@@ -513,6 +510,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
  	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
  	unsigned long addr = untagged_addr(far);
  	struct vm_area_struct *vma;
+	int si_code;

I think we should initialise this to 0. Currently all paths seem to set
si_code to something meaningful, but I'm not sure the last 'else' clause
in this patch is guaranteed to always cover exactly those earlier code
paths updating si_code. I'm not talking about the 'goto bad_area' paths,
since they set 'fault' to 0, but the fall-through after the second (under
the mm lock) handle_mm_fault().

Rechecked it: without this patch, the second handle_mm_fault() never
returns VM_FAULT_BADACCESS, but it could return VM_FAULT_SIGSEGV (and
maybe others), which is not handled by the other error paths,

  handle_mm_fault()
      ret = sanitize_fault_flags(vma, &flags);	/* can fail with VM_FAULT_SIGSEGV */
      if (!arch_vma_access_permitted(...))
          ret = VM_FAULT_SIGSEGV;

so the original logic would set si_code to SEGV_MAPERR here:

  fault == VM_FAULT_BADACCESS ? SEGV_ACCERR : SEGV_MAPERR,

Therefore, I think we should set the default si_code to SEGV_MAPERR.
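
Something like this at the top of do_page_fault() (just a sketch of the
idea, not the final diff):

	/*
	 * Default to SEGV_MAPERR: it also covers a VM_FAULT_SIGSEGV (or
	 * similar VM_FAULT_ERROR) coming back from handle_mm_fault()
	 * without going through a 'goto bad_area'.
	 */
	int si_code = SEGV_MAPERR;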



  	if (kprobe_page_fault(regs, esr))
  		return 0;
@@ -572,9 +570,10 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
  	if (!(vma->vm_flags & vm_flags)) {
  		vma_end_read(vma);
-		fault = VM_FAULT_BADACCESS;
+		fault = 0;
+		si_code = SEGV_ACCERR;
  		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
-		goto done;
+		goto bad_area;
  	}
  	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
  	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
@@ -599,15 +598,18 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
  retry:
  	vma = lock_mm_and_find_vma(mm, addr, regs);
  	if (unlikely(!vma)) {
-		fault = VM_FAULT_BADMAP;
-		goto done;
+		fault = 0;
+		si_code = SEGV_MAPERR;
+		goto bad_area;
  	}
-	if (!(vma->vm_flags & vm_flags))
-		fault = VM_FAULT_BADACCESS;
-	else
-		fault = handle_mm_fault(vma, addr, mm_flags, regs);
+	if (!(vma->vm_flags & vm_flags)) {
+		fault = 0;
+		si_code = SEGV_ACCERR;
+		goto bad_area;
+	}

What's releasing the mm lock here? Prior to this change, we either fell
through to mmap_read_unlock() below, or handle_mm_fault() released the
lock (VM_FAULT_RETRY, VM_FAULT_COMPLETED).

Indeed, will fix,
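
Presumably something along these lines (one possible shape of the fix,
sketch only):

	if (!(vma->vm_flags & vm_flags)) {
		mmap_read_unlock(mm);	/* lock taken by lock_mm_and_find_vma() */
		fault = 0;
		si_code = SEGV_ACCERR;
		goto bad_area;
	}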


+	fault = handle_mm_fault(vma, addr, mm_flags, regs);
  	/* Quick path to respond to signals */
  	if (fault_signal_pending(fault, regs)) {
  		if (!user_mode(regs))
@@ -626,13 +628,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
  	mmap_read_unlock(mm);
  done:
-	/*
-	 * Handle the "normal" (no error) case first.
-	 */
-	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
-			      VM_FAULT_BADACCESS))))
+	/* Handle the "normal" (no error) case first. */
+	if (likely(!(fault & VM_FAULT_ERROR)))
  		return 0;
+bad_area:
  	/*
  	 * If we are in kernel mode at this point, we have no context to
  	 * handle this fault with.
@@ -667,13 +667,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
  		arm64_force_sig_mceerr(BUS_MCEERR_AR, far, lsb, inf->name);
  	} else {
-		/*
-		 * Something tried to access memory that isn't in our memory
-		 * map.
-		 */
-		arm64_force_sig_fault(SIGSEGV,
-				      fault == VM_FAULT_BADACCESS ? SEGV_ACCERR : SEGV_MAPERR,
-				      far, inf->name);
+		/* Something tried to access memory outside the memory map */
+		arm64_force_sig_fault(SIGSEGV, si_code, far, inf->name);
  	}

We can get to the 'else' clause after the second handle_mm_fault(). Do we
guarantee that 'fault == 0' in this last block? If not, maybe add a warning
and some safe initialisation for 'si_code' to avoid leaking stack data.

As analyzed above, it is sufficient to set si_code to SEGV_MAPERR by default, right?
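
For the case in question, the flow would be roughly (sketch):

	fault = handle_mm_fault(vma, addr, mm_flags, regs);
				/* e.g. returns VM_FAULT_SIGSEGV */
	mmap_read_unlock(mm);
done:
	/*
	 * VM_FAULT_ERROR is set, so we do not return 0; none of the OOM,
	 * SIGBUS or hwpoison branches match, and the final else raises
	 * SIGSEGV with si_code still at its SEGV_MAPERR default.
	 */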

Thanks.






