Re: [PATCH v3 2/2] mm, drm/ttm: Fix vm page protection handling

On Fri 06-12-19 09:24:26, Thomas Hellström (VMware) wrote:
[...]
> @@ -283,11 +282,26 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  			pfn = page_to_pfn(page);
>  		}
>  
> +		/*
> +		 * Note that the value of @prot at this point may differ from
> +		 * the value of @vma->vm_page_prot in the caching- and
> +		 * encryption bits. This is because the exact location of the
> +		 * data may not be known at mmap() time and may also change
> +		 * at arbitrary times while the data is mmap'ed.
> +		 * This is ok as long as @vma->vm_page_prot is not used by
> +		 * the core vm to set caching- and encryption bits.
> +		 * This is ensured by core vm using pte_modify() to modify
> +		 * page table entry protection bits (that function preserves
> +		 * old caching- and encryption bits), and the @fault
> +		 * callback being the only function that creates new
> +		 * page table entries.
> +		 */
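
To spell out the pattern the quoted comment describes, here is a rough
sketch; it is not the actual ttm_bo_vm_fault_reserved() body, and the
my_drv_* helpers are made up for illustration:

#include <linux/mm.h>
#include <linux/pfn_t.h>
#include <drm/ttm/ttm_bo_driver.h>	/* ttm_io_prot() */

static vm_fault_t my_drv_vm_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long pfn;
	uint32_t caching_flags;
	pgprot_t prot;

	/* Hypothetical driver helpers resolving the current backing store. */
	pfn = my_drv_current_pfn(vma->vm_private_data, vmf->address);
	caching_flags = my_drv_current_caching(vma->vm_private_data);

	/*
	 * Start from vma->vm_page_prot and fix up the caching bits for
	 * wherever the data currently lives; ttm_io_prot() is the TTM
	 * helper doing that. The result may differ from
	 * vma->vm_page_prot in the caching- and encryption bits.
	 */
	prot = ttm_io_prot(caching_flags, vma->vm_page_prot);

	/*
	 * Create the pte with the per-fault protection. The core vm
	 * afterwards only touches this pte via pte_modify(), which
	 * preserves the caching- and encryption bits chosen here.
	 */
	return vmf_insert_mixed_prot(vma, vmf->address,
				     __pfn_to_pfn_t(pfn, PFN_DEV), prot);
}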

While this is a very valuable piece of information, I believe we need to
document it in the generic code where everybody will find it.
vmf_insert_mixed_prot() sounds like a good place to me, so that we are
explicit about the VM_MIXEDMAP case. A reference from vm_page_prot to
this function would also be really helpful.
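
Something along the following lines at the function itself would already
help; just a rough sketch of the kind of wording I mean, not a concrete
patch:

/**
 * vmf_insert_mixed_prot - insert a single pfn into a VM_MIXEDMAP vma
 *                         with an explicit pgprot
 * @vma: the vma the pte is inserted into
 * @addr: the faulting user virtual address
 * @pfn: the pfn to insert
 * @pgprot: the page protection to use for the new pte
 *
 * Note: @pgprot may differ from @vma->vm_page_prot in the caching- and
 * encryption bits, for example when the exact placement of the backing
 * memory is only known at fault time. That is fine as long as
 * @vma->vm_page_prot is only applied to already present ptes through
 * pte_modify(), which preserves those bits, and the vma's fault handler
 * stays the only place that creates new ptes.
 *
 * Return: vm_fault_t value.
 */
vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma,
				 unsigned long addr, pfn_t pfn,
				 pgprot_t pgprot);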

Thanks!

-- 
Michal Hocko
SUSE Labs
