Re: [PATCH v4] mm: improve mprotect(R|W) efficiency on pages referenced once

On Thu, 27 May 2021 12:04:53 -0700 Peter Collingbourne <pcc@xxxxxxxxxx> wrote:

> In the Scudo memory allocator [1] we would like to be able to
> detect use-after-free vulnerabilities involving large allocations
> by issuing mprotect(PROT_NONE) on the memory region used for the
> allocation when it is deallocated. Later on, after the memory
> region has been "quarantined" for a sufficient period of time we
> would like to be able to use it for another allocation by issuing
> mprotect(PROT_READ|PROT_WRITE).
>
> ...
>
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -35,6 +35,29 @@
>  
>  #include "internal.h"
>  
> +static bool may_avoid_write_fault(pte_t pte, struct vm_area_struct *vma,
> +				  unsigned long cp_flags)

Some comments would be nice, and the function is ideally structured for
explaining each test: "why" we're testing these things, not "what" we're
testing.


> +{

	/* here */
	
> +	if (!(cp_flags & MM_CP_DIRTY_ACCT)) {
> +		if (!(vma_is_anonymous(vma) && (vma->vm_flags & VM_WRITE)))
> +			return false;
> +
> +		if (page_count(pte_page(pte)) != 1)
> +			return false;
> +	}
> +

	/* and here */

> +	if (!pte_dirty(pte))
> +		return false;
> +

	/* and here */

> +	if (!pte_soft_dirty(pte) && (vma->vm_flags & VM_SOFTDIRTY))
> +		return false;

	/* and here */

> +	if (pte_uffd_wp(pte))
> +		return false;
> +
> +	return true;
> +}


>  static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  		unsigned long addr, unsigned long end, pgprot_t newprot,
>  		unsigned long cp_flags)
> @@ -43,7 +66,6 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  	spinlock_t *ptl;
>  	unsigned long pages = 0;
>  	int target_node = NUMA_NO_NODE;
> -	bool dirty_accountable = cp_flags & MM_CP_DIRTY_ACCT;
>  	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
>  	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
>  	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
> @@ -132,11 +154,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  			}
>  
>  			/* Avoid taking write faults for known dirty pages */

And this comment could be moved to may_avoid_write_fault()'s
explanation.

> -			if (dirty_accountable && pte_dirty(ptent) &&
> -					(pte_soft_dirty(ptent) ||
> -					 !(vma->vm_flags & VM_SOFTDIRTY))) {
> +			if (may_avoid_write_fault(ptent, vma, cp_flags))
>  				ptent = pte_mkwrite(ptent);
> -			}
>  			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
>  			pages++;
>  		} else if (is_swap_pte(oldpte)) {
> -- 
> 2.32.0.rc0.204.g9fa02ecfa5-goog



