On 05/29/2013 07:59 PM, Catalin Marinas wrote:
> On Wed, May 29, 2013 at 03:08:37PM +0100, Vineet Gupta wrote:
>> On 05/29/2013 07:33 PM, Catalin Marinas wrote:
>>> On Wed, May 29, 2013 at 01:56:13PM +0100, Vineet Gupta wrote:
>>>> zap_pte_range loops from @addr to @end. In the middle, if it runs out of
>>>> batching slots, TLB entries need to be flushed for @start to @interim,
>>>> NOT @interim to @end.
>>>>
>>>> Since the ARC port doesn't use page-free batching I can't test it myself,
>>>> but this seems like the right thing to do.
>>>> Observed this when working on a fix for the issue at thread:
>>>> http://www.spinics.net/lists/linux-arch/msg21736.html
>>>>
>>>> Signed-off-by: Vineet Gupta <vgupta@xxxxxxxxxxxx>
>>>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>>> Cc: Mel Gorman <mgorman@xxxxxxx>
>>>> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
>>>> Cc: Rik van Riel <riel@xxxxxxxxxx>
>>>> Cc: David Rientjes <rientjes@xxxxxxxxxx>
>>>> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>>>> Cc: linux-mm@xxxxxxxxx
>>>> Cc: linux-arch@xxxxxxxxxxxxxxx
>>>> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
>>>> Cc: Max Filippov <jcmvbkbc@xxxxxxxxx>
>>>> ---
>>>>  mm/memory.c | 9 ++++++---
>>>>  1 file changed, 6 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 6dc1882..d9d5fd9 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -1110,6 +1110,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>>>  	spinlock_t *ptl;
>>>>  	pte_t *start_pte;
>>>>  	pte_t *pte;
>>>> +	unsigned long range_start = addr;
>>>>
>>>>  again:
>>>>  	init_rss_vec(rss);
>>>> @@ -1215,12 +1216,14 @@ again:
>>>>  		force_flush = 0;
>>>>
>>>>  #ifdef HAVE_GENERIC_MMU_GATHER
>>>> -		tlb->start = addr;
>>>> -		tlb->end = end;
>>>> +		tlb->start = range_start;
>>>> +		tlb->end = addr;
>>>>  #endif
>>>>  		tlb_flush_mmu(tlb);
>>>> -		if (addr != end)
>>>> +		if (addr != end) {
>>>> +			range_start = addr;
>>>>  			goto again;
>>>> +		}
>>>>  	}
>>> Isn't this code only run if force_flush != 0? force_flush is set to
>>> !__tlb_remove_page() and this function always returns 1 on (generic TLB)
>>> UP since tlb_fast_mode() is 1. There is no batching on UP with the
>>> generic TLB code.
>> Correct! That's why the changelog says I couldn't test it on the ARC port itself :-)
>>
>> However, based on the other discussion (Max's TLB/PTE inconsistency), as I started
>> writing code to reuse this block to flush the TLB even for the non-forced case, I
>> realized that what it does is semantically incorrect and won't work for general flushing.
> An alternative would be to make sure the above block is always called
> when tlb_fast_mode():
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 6dc1882..f8b1f30 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1211,7 +1211,7 @@ again:
>  	 * the PTE lock to avoid doing the potential expensive TLB invalidate
>  	 * and page-free while holding it.
>  	 */
> -	if (force_flush) {
> +	if (force_flush || tlb_fast_mode(tlb)) {
>  		force_flush = 0;

I agree with the tlb_fast_mode() addition (to solve Max's issue). The problem,
however, is that when we hit this block at the end of the loop, @addr is already
pointing to @end, so the range flush gets start == end - not what we really
intended (a standalone sketch of this is further below).

>> Ignoring all other threads, do we agree that the existing code - if used in any
>> situation - is semantically incorrect?
> It is incorrect unless there are requirements for
> arch_leave_lazy_mmu_mode() to handle the TLB invalidation (it doesn't
> look like it's widely implemented though).
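To make the flush-range issue concrete, here is a minimal standalone sketch
(plain userspace C, not kernel code; PAGE_SIZE, BATCH_PAGES and the
*_sketch helpers are made up for illustration) of the batching loop, showing
why the flush after a batching break must cover [range_start, addr) - the
PTEs already cleared - rather than [addr, end):

/*
 * Simplified model of the zap_pte_range() batching pattern discussed above.
 */
#include <stdio.h>

#define PAGE_SIZE   4096UL
#define BATCH_PAGES 4           /* pretend the mmu_gather batch holds 4 pages */

static void flush_tlb_range_sketch(unsigned long start, unsigned long end)
{
	printf("flush TLB for [%#lx, %#lx)\n", start, end);
}

static void zap_range_sketch(unsigned long addr, unsigned long end)
{
	unsigned long range_start = addr;	/* what the patch adds */
	int batched = 0;

	while (addr != end) {
		/* "clear" one PTE and queue its page for freeing */
		batched++;
		addr += PAGE_SIZE;

		if (batched == BATCH_PAGES) {	/* the force_flush case */
			/*
			 * Wrong: flush [addr, end)         - PTEs not yet visited
			 * Right: flush [range_start, addr) - PTEs already cleared
			 */
			flush_tlb_range_sketch(range_start, addr);
			batched = 0;
			range_start = addr;	/* restart the range */
		}
	}
	if (batched)
		flush_tlb_range_sketch(range_start, addr);
}

int main(void)
{
	/* zap 10 pages starting at an arbitrary address */
	zap_range_sketch(0x100000UL, 0x100000UL + 10 * PAGE_SIZE);
	return 0;
}

Note also that if the flush only happens once, at the end of the loop (the
end-of-loop case above), flushing [addr, end) degenerates to an empty range
since addr == end by then - which is the start == end problem mentioned above.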
This patch is preparatory and independent of Max's issue. It only fixes the
forced-flush case, for whoever uses it right now (which, of course, UP +
generic TLB doesn't).

Thx,
-Vineet