Re: TLB and PTE coherency during munmap

On Wed, May 29, 2013 at 11:04:35PM +0100, Catalin Marinas wrote:
> On 29 May 2013 18:51, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -384,6 +384,21 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
> >
> >  #endif /* CONFIG_HAVE_RCU_TABLE_FREE */
> >
> > +static inline void cond_resched_tlb(struct mmu_gather *tlb)
> > +{
> > +#ifndef CONFIG_PREEMPT
> > +       /*
> > +        * For full preempt kernels we must do regular batching like
> > +        * SMP, see tlb_fast_mode(). For !PREEMPT we can 'cheat' and
> > +        * do a flush before our voluntary 'yield'.
> > +        */
> > +       if (need_resched()) {
> > +               tlb_flush_mmu(tlb);
> > +               cond_resched();
> > +       }
> > +#endif
> > +}
> 
> Does it matter that in the CONFIG_PREEMPT case, you no longer call
> cond_resched()? I guess we can just rely on the kernel full preemption
> to reschedule as needed.

Exactly, the preempt_enable() from the spin_unlock() in pte_unmap_unlock()
will most likely trigger a preemption immediately.
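
(For reference, a rough paraphrase of that CONFIG_PREEMPT path -- simplified,
not the exact macros from include/linux/preempt.h:)

    /*
     * spin_unlock() ends in preempt_enable(); on CONFIG_PREEMPT kernels
     * that reschedules right away if need_resched got set while the
     * PTE lock was held.
     */
    #define preempt_enable()					\
    do {							\
    	preempt_enable_no_resched();				\
    	barrier();						\
    	if (unlikely(test_thread_flag(TIF_NEED_RESCHED)))	\
    		preempt_schedule();				\
    } while (0)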

And since we do full batching for PREEMPT, doing extra flushes there would be
detrimental to performance -- however unlikely it is that we'd still see
need_resched() set at that point.
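
For illustration only, the intended usage in the unmap path would look roughly
like the below (hypothetical helper name, not the actual hunk from this patch):

    static void unmap_range_sketch(struct mmu_gather *tlb,
    			       struct vm_area_struct *vma,
    			       unsigned long addr, unsigned long end)
    {
    	do {
    		/* zap_next_batch() is hypothetical: it clears one batch
    		 * of PTEs under the PTE lock, gathers the pages into tlb,
    		 * and drops the lock via pte_unmap_unlock(). */
    		addr = zap_next_batch(tlb, vma, addr, end);

    		/*
    		 * !PREEMPT: flush and free the gathered pages before we
    		 * voluntarily sleep, so no other CPU keeps stale TLB
    		 * entries for PTEs we already cleared.  PREEMPT: no-op,
    		 * the spin_unlock() above was already a preemption point.
    		 */
    		cond_resched_tlb(tlb);
    	} while (addr != end);
    }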