Re: [PATCH v3] x86/sgx: Fix sgx_encl_may_map locking

On Mon, Oct 05, 2020 at 05:11:19PM +0300, Jarkko Sakkinen wrote:
> Fix the issue further discussed in:

No, this is still utter crap.  Just use the version I sent.

> 1. https://lore.kernel.org/linux-sgx/op.0rwbv916wjvjmi@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
> 2. https://lore.kernel.org/linux-sgx/20201003195440.GD20115@xxxxxxxxxxxxxxxxxxxx/
> 
> Reported-by: Haitao Huang <haitao.huang@xxxxxxxxxxxxxxx>
> Suggested-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Cc: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> Cc: Jethro Beekman <jethro@xxxxxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@xxxxxxxxxxxxxxx>
> ---
> v3:
> * Added the missing unlock pointed out by Matthew.
> * Tested with the correct patch (last time had v1 applied)
> * I don't know what happened to the v2 changelog; I checked patchwork
>   and it wasn't there. Hope this one is not scrapped.
>  arch/x86/kernel/cpu/sgx/encl.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 4c6407cd857a..e91e521b03a8 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -307,6 +307,8 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
>  	unsigned long idx_start = PFN_DOWN(start);
>  	unsigned long idx_end = PFN_DOWN(end - 1);
>  	struct sgx_encl_page *page;
> +	unsigned long count = 0;
> +	int ret = 0;
>  
>  	XA_STATE(xas, &encl->page_array, idx_start);
>  
> @@ -317,11 +319,30 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
>  	if (current->personality & READ_IMPLIES_EXEC)
>  		return -EACCES;
>  
> -	xas_for_each(&xas, page, idx_end)
> -		if (!page || (~page->vm_max_prot_bits & vm_prot_bits))
> -			return -EACCES;
> +	/*
> +	 * No need to hold encl->lock:
> +	 * 1. None of the page->* get written.
> +	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(). This
> +	 *    is before calling xa_insert(). After that it is never modified.
> +	 */
> +	xas_lock(&xas);
> +	xas_for_each(&xas, page, idx_end) {
> +		if (++count % XA_CHECK_SCHED)
> +			continue;
>  
> -	return 0;
> +		if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
> +			ret = -EACCES;
> +			break;
> +		}
> +
> +		xas_pause(&xas);
> +		xas_unlock(&xas);
> +		cond_resched();
> +		xas_lock(&xas);
> +	}
> +	xas_unlock(&xas);
> +
> +	return ret;
>  }
>  
>  static int sgx_vma_mprotect(struct vm_area_struct *vma,
> -- 
> 2.25.1
> 


