Re: [PATCH] MIPS: make userspace mapping young by default

I am ok with it


  Original Message  
From: Nicholas Piggin
Sent: Thursday, February 4, 2021 11:55
To: ambrosehua@xxxxxxxxx; Huang Pei; Thomas Bogendoerfer
Cc: Andrew Morton; Huacai Chen; Gao Juxin; Jiaxun Yang; linux-arch@xxxxxxxxxxxxxxx; linux-mips@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx; Li Xuefeng; Bibo Mao; Paul Burton; Yang Tiezhu; Fuxin Zhang
Subject: Re: [PATCH] MIPS: make userspace mapping young by default

Excerpts from Huang Pei's message of February 4, 2021 11:39 am:
> MIPS page fault path (except for huge pages) takes 3 exceptions (1 TLB Miss
> + 2 TLB Invalid), but the second TLB Invalid exception is just
> triggered by __update_tlb from do_page_fault writing the TLB without
> _PAGE_VALID set. With this patch, userspace mapping prot is made
> young by default (with both _PAGE_VALID and _PAGE_ACCESSED set),
> so it takes only 1 TLB Miss + 1 TLB Invalid exception.
> 
> Remove pte_sw_mkyoung so it no longer pollutes MM code, and bring the
> page fault delay of MIPS on par with other architectures.
> 
> Signed-off-by: Huang Pei <huangpei@xxxxxxxxxxx>

Could we merge this? For the core code,

Reviewed-by: Nicholas Piggin <npiggin@xxxxxxxxx>

> ---
>  arch/mips/mm/cache.c    | 30 ++++++++++++++++--------------
>  include/linux/pgtable.h |  8 --------
>  mm/memory.c             |  3 ---
>  3 files changed, 16 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
> index 23b16bfd97b2..e19cf424bb39 100644
> --- a/arch/mips/mm/cache.c
> +++ b/arch/mips/mm/cache.c
> @@ -156,29 +156,31 @@ unsigned long _page_cachable_default;
>  EXPORT_SYMBOL(_page_cachable_default);
> 
>  #define PM(p)	__pgprot(_page_cachable_default | (p))
> +#define PVA(p)	PM(_PAGE_VALID | _PAGE_ACCESSED | (p))
> 
>  static inline void setup_protection_map(void)
>  {
>  	protection_map[0] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
> -	protection_map[1] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
> -	protection_map[2] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
> -	protection_map[3] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
> -	protection_map[4] = PM(_PAGE_PRESENT);
> -	protection_map[5] = PM(_PAGE_PRESENT);
> -	protection_map[6] = PM(_PAGE_PRESENT);
> -	protection_map[7] = PM(_PAGE_PRESENT);
> +	protection_map[1] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC);
> +	protection_map[2] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
> +	protection_map[3] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC);
> +	protection_map[4] = PVA(_PAGE_PRESENT);
> +	protection_map[5] = PVA(_PAGE_PRESENT);
> +	protection_map[6] = PVA(_PAGE_PRESENT);
> +	protection_map[7] = PVA(_PAGE_PRESENT);
> 
>  	protection_map[8] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ);
> -	protection_map[9] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC);
> -	protection_map[10] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE |
> +	protection_map[9] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC);
> +	protection_map[10] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE |
>  				_PAGE_NO_READ);
> -	protection_map[11] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE);
> -	protection_map[12] = PM(_PAGE_PRESENT);
> -	protection_map[13] = PM(_PAGE_PRESENT);
> -	protection_map[14] = PM(_PAGE_PRESENT | _PAGE_WRITE);
> -	protection_map[15] = PM(_PAGE_PRESENT | _PAGE_WRITE);
> +	protection_map[11] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE);
> +	protection_map[12] = PVA(_PAGE_PRESENT);
> +	protection_map[13] = PVA(_PAGE_PRESENT);
> +	protection_map[14] = PVA(_PAGE_PRESENT | _PAGE_WRITE);
> +	protection_map[15] = PVA(_PAGE_PRESENT | _PAGE_WRITE);
>  }
> 
> +#undef PVA
>  #undef PM
> 
>  void cpu_cache_init(void)
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 8fcdfa52eb4b..8c042627399a 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -432,14 +432,6 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
>   * To be differentiate with macro pte_mkyoung, this macro is used on platforms
>   * where software maintains page access bit.
>   */
> -#ifndef pte_sw_mkyoung
> -static inline pte_t pte_sw_mkyoung(pte_t pte)
> -{
> -	return pte;
> -}
> -#define pte_sw_mkyoung	pte_sw_mkyoung
> -#endif
> -
>  #ifndef pte_savedwrite
>  #define pte_savedwrite pte_write
>  #endif
> diff --git a/mm/memory.c b/mm/memory.c
> index feff48e1465a..95718a623884 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2890,7 +2890,6 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
>  		}
>  		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
>  		entry = mk_pte(new_page, vma->vm_page_prot);
> -		entry = pte_sw_mkyoung(entry);
>  		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> 
>  		/*
> @@ -3548,7 +3547,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	__SetPageUptodate(page);
> 
>  	entry = mk_pte(page, vma->vm_page_prot);
> -	entry = pte_sw_mkyoung(entry);
>  	if (vma->vm_flags & VM_WRITE)
>  		entry = pte_mkwrite(pte_mkdirty(entry));
> 
> @@ -3824,7 +3822,6 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct page *page)
> 
>  	flush_icache_page(vma, page);
>  	entry = mk_pte(page, vma->vm_page_prot);
> -	entry = pte_sw_mkyoung(entry);
>  	if (write)
>  		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>  	/* copy-on-write page */
> -- 
> 2.17.1
> 
> 
> 



