Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

Hi all, [+Peter]

Apologies for the delay; I'm attending a conference this week so it's tricky
to keep up with email.

On Wed, May 08, 2019 at 05:34:49AM +0800, Yang Shi wrote:
> A few new fields were added to mmu_gather to make TLB flushing smarter
> for huge pages by tracking which levels of the page tables have been
> changed.
> 
> __tlb_reset_range() is used to reset all of this page table state to
> "unchanged"; it is called by the TLB flush code when there are parallel
> mapping changes to the same range under a non-exclusive lock (i.e. read
> mmap_sem).  Before commit dd2283f2605e ("mm: mmap: zap pages with read
> mmap_sem in munmap"), MADV_DONTNEED was the only operation that could
> zap pages in parallel, and it does not free page tables.  But the
> aforementioned commit allows munmap() to run under read mmap_sem and
> free page tables.  This causes a bug [1] reported by Jan Stancek, since
> __tlb_reset_range() may pass the wrong page table state to the
> architecture-specific TLB flush operations.

Yikes. Is it actually safe to run free_pgtables() concurrently for a given
mm?

> So removing __tlb_reset_range() sounds sane.  This may cause more TLB
> flushing for MADV_DONTNEED, but it should not be called very often, so
> the impact should be negligible.
> 
> The originally proposed fix came from Jan Stancek, who did most of the
> debugging; I just wrapped everything up together.

I'm still paging the nested flush logic back in, but I have some comments on
the patch below.

> [1] https://lore.kernel.org/linux-mm/342bf1fd-f1bf-ed62-1127-e911b5032274@xxxxxxxxxxxxxxxxx/T/#m7a2ab6c878d5a256560650e56189cfae4e73217f
> 
> Reported-by: Jan Stancek <jstancek@xxxxxxxxxx>
> Tested-by: Jan Stancek <jstancek@xxxxxxxxxx>
> Cc: Will Deacon <will.deacon@xxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
> Signed-off-by: Jan Stancek <jstancek@xxxxxxxxxx>
> ---
>  mm/mmu_gather.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 99740e1..9fd5272 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -249,11 +249,12 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
>  	 * flush by batching, a thread has stable TLB entry can fail to flush

Urgh, we should rewrite this comment while we're here so that it makes sense...
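Maybe something along these lines (just a strawman wording, feel free to
improve on it):

	/*
	 * Another thread may be zapping PTEs in this range under the read
	 * side of mmap_sem with a deferred (batched) TLB flush; it can skip
	 * entries it observes as pte_none or !pte_dirty and leave stale TLB
	 * entries behind.  Flush forcefully if we detect such parallel
	 * batching.
	 */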

>  	 * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
>  	 * forcefully if we detect parallel PTE batching threads.
> +	 *
> +	 * munmap() may change the mapping under a non-exclusive lock and
> +	 * also free page tables.  Do not call __tlb_reset_range() for it.
>  	 */
> -	if (mm_tlb_flush_nested(tlb->mm)) {
> -		__tlb_reset_range(tlb);
> +	if (mm_tlb_flush_nested(tlb->mm))
>  		__tlb_adjust_range(tlb, start, end - start);
> -	}

I don't think we can elide the call to __tlb_reset_range() entirely, since I
think we do want to clear the freed_pXX bits to ensure that we walk the
range with the smallest mapping granule that we have. Otherwise couldn't we
have a problem if we hit a PMD that had been cleared, but the TLB
invalidation for the PTEs that used to be linked below it was still pending?
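For context, the granule selection I'm referring to is the generic helper
which, from memory, looks roughly like this:

	static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
	{
		if (tlb->cleared_ptes)
			return PAGE_SHIFT;
		if (tlb->cleared_pmds)
			return PMD_SHIFT;
		if (tlb->cleared_puds)
			return PUD_SHIFT;
		if (tlb->cleared_p4ds)
			return P4D_SHIFT;
		return PAGE_SHIFT;
	}

so if this thread has only cleared_pmds set (e.g. it was zapping THP), the
expanded range could be invalidated with a PMD-sized stride even though the
concurrent unmapper has been fiddling with PTE-level entries.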

Perhaps we should just set fullmm if we see that there's a concurrent
unmapper, rather than doing a worst-case range invalidation. Do you have a
feeling for how often mm_tlb_flush_nested() triggers in practice?
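Something like the below is what I have in mind (completely untested, and
the interaction with freed_tables probably needs more thought):

	if (mm_tlb_flush_nested(tlb->mm)) {
		/*
		 * A concurrent unmapper may have freed page tables and
		 * changed PTEs anywhere in the address space, so punt to a
		 * full-mm flush instead of guessing at a range and granule.
		 */
		tlb->fullmm = 1;
		__tlb_reset_range(tlb);
	}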

Will


