On Tue, Jun 6, 2017 at 12:11 PM, Rik van Riel <riel@xxxxxxxxxx> wrote:
> On Mon, 2017-06-05 at 15:36 -0700, Andy Lutomirski wrote:
>
>> +++ b/arch/x86/include/asm/mmu_context.h
>> @@ -122,8 +122,10 @@ static inline void switch_ldt(struct mm_struct
>> *prev, struct mm_struct *next)
>>
>>  static inline void enter_lazy_tlb(struct mm_struct *mm, struct
>> task_struct *tsk)
>>  {
>> -        if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK)
>> -                this_cpu_write(cpu_tlbstate.state, TLBSTATE_LAZY);
>> +        int cpu = smp_processor_id();
>> +
>> +        if (cpumask_test_cpu(cpu, mm_cpumask(mm)))
>> +                cpumask_clear_cpu(cpu, mm_cpumask(mm));
>>  }
>
> This is an atomic write to a shared cacheline,
> every time a CPU goes idle.
>
> I am not sure you really want to do this, since
> there are some workloads out there that have a
> crazy number of threads, which go idle hundreds,
> or even thousands of times a second, on dozens
> of CPUs at a time. *cough*Java*cough*

It seems to me that the set of workloads on which this patch will
hurt performance is rather limited. We'd need an mm with a lot of
threads, probably spread among a lot of nodes, that is constantly
going idle and non-idle on multiple CPUs on the same node, where
there's nothing else happening on those CPUs. If there's a
low-priority background task on the relevant CPUs, then existing
kernels will act just like patched kernels: the same bit will be
written by the same atomic operation at the same times.

> Keeping track of the state in a CPU-local variable,
> written with a non-atomic write, would be much more
> CPU cache friendly here.

We could, but then handling remote flushes becomes more complicated.
My inclination would be to keep the patch as is and, if this turns
out to be an actual problem, think about solving it more generally.
The real issue is that we need a way to reasonably efficiently find
the set of CPUs on which a given mm is currently loaded and non-lazy.

A simple improvement would be to split up mm_cpumask so that we'd
have one cache line per node. (And we'd presumably allow several mms
to share the same pile of memory.) Or we could go all out and use
percpu state only and iterate over all online CPUs when flushing
(ick!). Or something in between.
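
As a rough sketch of what the per-node split could look like -- all
the names here (mm_node_cpumask, mm_cpumask_node, etc.) are made up
for illustration and this is not actual kernel code -- the idea is
that idle/non-idle transitions only touch a mask that lives on the
local node, and a remote flush walks each node's mask in turn:

#include <linux/cpumask.h>
#include <linux/topology.h>
#include <linux/cache.h>

/* One cpumask per node, each padded out to its own cache line. */
struct mm_node_cpumask {
        struct cpumask mask ____cacheline_aligned_in_smp;
};

/*
 * An array of these, sized by nr_node_ids, would hang off mm_struct
 * instead of the single shared mm_cpumask bitmap.
 */
static inline struct cpumask *
mm_cpumask_node(struct mm_node_cpumask *masks, int cpu)
{
        return &masks[cpu_to_node(cpu)].mask;
}

/* enter_lazy_tlb() would then only dirty the local node's line. */
static inline void mm_mark_lazy(struct mm_node_cpumask *masks, int cpu)
{
        if (cpumask_test_cpu(cpu, mm_cpumask_node(masks, cpu)))
                cpumask_clear_cpu(cpu, mm_cpumask_node(masks, cpu));
}

/* A remote flush walks every node's mask to find non-lazy CPUs. */
static inline void mm_flush_walk(struct mm_node_cpumask *masks)
{
        int node, cpu;

        for_each_node(node)
                for_each_cpu(cpu, &masks[node].mask)
                        ; /* queue/send a flush to @cpu here */
}

The cross-node contention on idle entry goes away, at the cost of the
flush path touching nr_node_ids cache lines instead of one.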