Re: Dirty/Access bits vs. page content

> From 21819f790e3d206ad77cd20d6e7cae86311fc87d Mon Sep 17 00:00:00 2001
> From: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Date: Mon, 21 Apr 2014 15:29:49 -0700
> Subject: [PATCH 1/2] mm: move page table dirty state into TLB gather operation
> 
> When tearing down a memory mapping, we have long delayed the actual
> freeing of the pages until after the (batched) TLB flush, since only
> after the TLB entries have been flushed from all CPU's do we know that
> none of the pages will be accessed any more.
> 
> HOWEVER.
> 
> Ben Herrenschmidt points out that we need to do the same thing for
> marking a shared mapped page dirty.  Because if we mark the underlying
> page dirty before we have flushed the TLB's, other CPU's may happily
> continue to write to the page (using their stale TLB contents) after
> we've marked the page dirty, and they can thus race with any cleaning
> operation.
> 
> Now, in practice, any page cleaning operations will take much longer to
> start the IO on the page than it will have taken us to get to the TLB
> flush, so this is going to be hard to trigger in real life.  In fact, so
> far nobody has even come up with a reasonable test-case for this to show
> it happening.
> 
> But what we do now (set_page_dirty() before flushing the TLB) really is
> wrong.  And this commit does not fix it, but by moving the dirty
> handling into the TLB gather operation at least the internal interfaces
> now support the notion of those TLB gather interfaces doing the right
> thing.
> 
> Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
> Cc: Peter Anvin <hpa@xxxxxxxxx>

Acked-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>

> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> Cc: linux-arch@xxxxxxxxxxxxxxx
> Cc: linux-mm@xxxxxxxxx
> Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> ---
>  arch/arm/include/asm/tlb.h  |  6 ++++--
>  arch/ia64/include/asm/tlb.h |  6 ++++--
>  arch/s390/include/asm/tlb.h |  4 +++-
>  arch/sh/include/asm/tlb.h   |  6 ++++--
>  arch/um/include/asm/tlb.h   |  6 ++++--
>  include/asm-generic/tlb.h   |  4 ++--
>  mm/hugetlb.c                |  4 +---
>  mm/memory.c                 | 15 +++++++++------
>  8 files changed, 31 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
> index 0baf7f0d9394..ac9c16af8e63 100644
> --- a/arch/arm/include/asm/tlb.h
> +++ b/arch/arm/include/asm/tlb.h
> @@ -165,8 +165,10 @@ tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  		tlb_flush(tlb);
>  }
>  
> -static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
> +static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page, bool dirty)
>  {
> +	if (dirty)
> +		set_page_dirty(page);
>  	tlb->pages[tlb->nr++] = page;
>  	VM_BUG_ON(tlb->nr > tlb->max);
>  	return tlb->max - tlb->nr;
> @@ -174,7 +176,7 @@ static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
>  
>  static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
>  {
> -	if (!__tlb_remove_page(tlb, page))
> +	if (!__tlb_remove_page(tlb, page, 0))
>  		tlb_flush_mmu(tlb);
>  }

So I checked this, and currently the only users of tlb_remove_page() are
the arch code freeing page table pages, and THP. The latter is OK
because it is strictly anon (for now).

Anybody (/me looks at Kiryl) thinking of making THP work for shared
pages should also cure this.
