> On Jun 5, 2017, at 3:36 PM, Andy Lutomirski <luto@xxxxxxxxxx> wrote:
>
> x86's lazy TLB mode used to be fairly weak -- it would switch to
> init_mm the first time it tried to flush a lazy TLB. This meant an
> unnecessary CR3 write and, if the flush was remote, an unnecessary
> IPI.
>
> Rewrite it entirely. When we enter lazy mode, we simply remove the
> cpu from mm_cpumask. This means that we need a way to figure out
> whether we've missed a flush when we switch back out of lazy mode.
> I use the tlb_gen machinery to track whether a context is up to
> date.
>
> [snip]
>
> @@ -67,133 +67,118 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> {
> [snip]
> +		/* Resume remote flushes and then read tlb_gen. */
> +		cpumask_set_cpu(cpu, mm_cpumask(next));
> +		next_tlb_gen = atomic64_read(&next->context.tlb_gen);

It seems correct, but it got me somewhat confused at first. Perhaps it is
worth a comment that a memory barrier is not needed here, since
cpumask_set_cpu() uses a locked instruction on x86. Otherwise, somebody
may even copy-paste it to another architecture...

Thanks,
Nadav
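
P.S. Just to illustrate, the kind of comment I have in mind might look
roughly like the sketch below. This is my wording only, not part of the
patch; it leans on the x86 guarantee that set_bit() and friends are
LOCK-prefixed and therefore fully ordered, and it names
smp_mb__after_atomic() as the barrier an architecture without that
implicit ordering would want instead:

	/*
	 * Resume remote flushes and then read tlb_gen.
	 *
	 * No explicit barrier is needed between the two steps: on x86,
	 * cpumask_set_cpu() is a LOCK-prefixed bit set, which acts as a
	 * full memory barrier, so the read of tlb_gen below cannot be
	 * reordered before this CPU becomes visible in mm_cpumask().
	 * Otherwise a remote flusher could bump tlb_gen after we read
	 * it yet skip the IPI because our bit is not set yet.  An
	 * architecture without this implicit ordering would need an
	 * smp_mb__after_atomic() between the two lines.
	 */
	cpumask_set_cpu(cpu, mm_cpumask(next));
	next_tlb_gen = atomic64_read(&next->context.tlb_gen);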