On Wed, May 29, 2013 at 11:04:35PM +0100, Catalin Marinas wrote:
> On 29 May 2013 18:51, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -384,6 +384,21 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
> >
> >  #endif /* CONFIG_HAVE_RCU_TABLE_FREE */
> >
> > +static inline void cond_resched_tlb(struct mmu_gather *tlb)
> > +{
> > +#ifndef CONFIG_PREEMPT
> > +	/*
> > +	 * For full preempt kernels we must do regular batching like
> > +	 * SMP, see tlb_fast_mode(). For !PREEMPT we can 'cheat' and
> > +	 * do a flush before our voluntary 'yield'.
> > +	 */
> > +	if (need_resched()) {
> > +		tlb_flush_mmu(tlb);
> > +		cond_resched();
> > +	}
> > +#endif
> > +}
>
> Does it matter that in the CONFIG_PREEMPT case, you no longer call
> cond_resched()? I guess we can just rely on the kernel full preemption
> to reschedule as needed.

Exactly, the preempt_enable() from the spin_unlock() in pte_unmap_unlock()
will most likely trigger a preemption right away. And since we do full
batching for PREEMPT, doing extra flushes would be detrimental to
performance, however unlikely it is that we'd still see need_resched()
set there.
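
To spell that out, here is a simplified sketch of the unlock path Peter
is relying on. This is not the literal kernel source (the real
spin_unlock() has lockdep/debug and RT variants); it just shows the
shape of the common non-debug path, and sketch_spin_unlock() is a made-up
name for illustration:

	/*
	 * Roughly what pte_unmap_unlock() ends up doing when it drops
	 * the PTE lock in zap_pte_range().
	 */
	static inline void sketch_spin_unlock(spinlock_t *lock)
	{
		do_raw_spin_unlock(lock);	/* release the lock word */
		/*
		 * preempt_enable() decrements preempt_count and, on
		 * CONFIG_PREEMPT, reschedules immediately if
		 * need_resched() became true while the lock was held.
		 */
		preempt_enable();
	}

So with full preemption, every pte_unmap_unlock() is already a
preemption point, which is why cond_resched_tlb() can compile away to
nothing in that configuration.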