On 04/01/2014 06:53 AM, Ingo Molnar wrote:
>
> The speedup looks good to me!
>
> I have one major concern (see the last item), plus a few minor nits:

I will address all the minor issues. Let me explain the major one :)

>> @@ -196,6 +201,13 @@ static inline void reset_lazy_tlbstate(void)
>>  	this_cpu_write(cpu_tlbstate.active_mm, &init_mm);
>>  }
>>
>> +static inline void tlb_set_force_flush(int cpu)
>> +{
>> +	struct tlb_state *percputlb= &per_cpu(cpu_tlbstate, cpu);
>
> s/b= /b = /
>
>> +	if (percputlb->force_flush == false)
>> +		percputlb->force_flush = true;
>> +}
>> +
>>  #endif /* SMP */

This code tests the flag before setting it, so even under heavy
pageout scanning activity each remote cache line is pulled into
exclusive state only once, on the first set, not on every call.

>> @@ -399,11 +400,13 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>>  int ptep_clear_flush_young(struct vm_area_struct *vma,
>>  			   unsigned long address, pte_t *ptep)
>>  {
>> -	int young;
>> +	int young, cpu;
>>
>>  	young = ptep_test_and_clear_young(vma, address, ptep);
>> -	if (young)
>> -		flush_tlb_page(vma, address);
>> +	if (young) {
>> +		for_each_cpu(cpu, vma->vm_mm->cpu_vm_mask_var)
>> +			tlb_set_force_flush(cpu);
>
> Hm, just to play the devil's advocate - what happens when we have a va
> that is used on a few dozen, a few hundred or a few thousand CPUs?
> Will the savings be dwarfed by the O(nr_cpus_used) loop overhead?
>
> Especially as this is touching cachelines on other CPUs and likely
> creating the worst kind of cachemisses. That can really kill
> performance.

flush_tlb_page() does the same O(nr_cpus_used) loop, but it sends an
IPI to every one of those CPUs each time it is called, whereas this
scheme dirties each remote cache line at most once per pageout run
(or until the next context switch clears the flag). Does that address
your concern?
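
For completeness, here is roughly where the flag gets consumed, on
the context switch side. This is a simplified, untested sketch, not
a literal hunk from the patch: only force_flush and cpu_tlbstate
appear in the code quoted above; the helper name and the exact hook
point are illustrative.

/*
 * Called on the context switch path. If reclaim marked this CPU
 * for a flush, do a single local TLB flush here, instead of having
 * taken one IPI per cleared accessed bit.
 */
static inline void tlb_check_force_flush(void)
{
	struct tlb_state *tlbstate = this_cpu_ptr(&cpu_tlbstate);

	if (tlbstate->force_flush) {
		tlbstate->force_flush = false;
		local_flush_tlb();
	}
}

The expensive part (the flush itself) then happens at most once per
CPU per pageout run, on a path the CPU was going to take anyway.

--
All rights reversed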