On Tue, Jul 11, 2017 at 12:18 PM, Mel Gorman <mgorman@xxxxxxx> wrote:

I would change this slightly:

> +void flush_tlb_batched_pending(struct mm_struct *mm)
> +{
> +	if (mm->tlb_flush_batched) {
> +		flush_tlb_mm(mm);

How about making this a new helper, arch_tlbbatch_flush_one_mm(mm)?
The idea is that this could be implemented as flush_tlb_mm(mm), but
the actual semantics needed are weaker.  All that's really needed
AFAICS is to make sure that any arch_tlbbatch_add_mm() calls on this
mm that have already happened become effective by the time that
arch_tlbbatch_flush_one_mm() returns.

The initial implementation would be this:

struct flush_tlb_info info = {
	.mm = mm,
	.new_tlb_gen = atomic64_read(&mm->context.tlb_gen),
	.start = 0,
	.end = TLB_FLUSH_ALL,
};

and the rest is like flush_tlb_mm_range().  flush_tlb_func_common()
will already do the right thing, but the comments should probably be
updated, too.  The benefit would be that, if you just call this on an
mm when everything is already flushed, it will still do the IPIs but
it won't do the actual flush.

A better future implementation could iterate over each cpu in
mm_cpumask() and, using either a new lock or very careful atomics,
check whether that CPU really needs flushing.  In -tip, all the
information needed to figure this out is already there in the percpu
state -- it's just not currently set up for remote access.

For backports, it would just be flush_tlb_mm().

--Andy
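To make the shape concrete, a rough and untested sketch of how such a
helper might look, modeled on the flush_tlb_mm_range() flow described
above.  It assumes the -tip x86 TLB code of that era (struct
flush_tlb_info, flush_tlb_func_local(), flush_tlb_others(),
cpu_tlbstate.loaded_mm) and would have to live in arch/x86/mm/tlb.c so
those statics are in scope; arch_tlbbatch_flush_one_mm() is only the
name proposed above, not an existing function:

/*
 * Sketch only: make any TLB flushes already batched for @mm via
 * arch_tlbbatch_add_mm() effective before returning.
 */
void arch_tlbbatch_flush_one_mm(struct mm_struct *mm)
{
	int cpu;
	struct flush_tlb_info info = {
		.mm = mm,
		/*
		 * The generation is read, not bumped: we only need
		 * already-batched flushes to complete, so CPUs that
		 * have caught up to this generation can skip the
		 * actual flush in flush_tlb_func_common().
		 */
		.new_tlb_gen = atomic64_read(&mm->context.tlb_gen),
		.start = 0,
		.end = TLB_FLUSH_ALL,
	};

	cpu = get_cpu();

	if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
		/* Flush (or skip, per tlb_gen) the local CPU. */
		local_irq_disable();
		flush_tlb_func_local(&info, TLB_LOCAL_MM_SHOOTDOWN);
		local_irq_enable();
	}

	/* IPI every other CPU that still has this mm in its cpumask. */
	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
		flush_tlb_others(mm_cpumask(mm), &info);

	put_cpu();
}

The call site in flush_tlb_batched_pending() would then just be
arch_tlbbatch_flush_one_mm(mm) instead of flush_tlb_mm(mm), with the
backport variant keeping flush_tlb_mm() as noted above.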