X86 TLB range flushing uses a balance point to decide whether a single
global TLB flush or multiple single-page flushes would perform best.
This patch takes the number of CPUs that must be flushed into account.

Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
---
 arch/x86/mm/tlb.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 09b8cb8..0cababa 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -217,6 +217,9 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	act_entries = mm->total_vm > act_entries ? act_entries : mm->total_vm;
 	nr_base_pages = (end - start) >> PAGE_SHIFT;
 
+	/* Take the number of CPUs to range flush into account */
+	nr_base_pages *= cpumask_weight(mm_cpumask(mm));
+
 	/* tlb_flushall_shift is on balance point, details in commit log */
 	if (nr_base_pages > act_entries || has_large_page(mm, start, end)) {
 		count_vm_event(NR_TLB_LOCAL_FLUSH_ALL);
-- 
1.8.4
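
For readers outside the kernel tree, below is a minimal user-space sketch
of the decision this hunk changes: scale the page count by the number of
CPUs that would receive the range flush, and fall back to a full flush
once the scaled count exceeds the estimated number of active TLB entries.
The function name, parameters, and sample values here are illustrative
assumptions for the sketch, not the kernel's actual API.

#include <stdio.h>
#include <stdbool.h>

/*
 * Illustrative sketch (not kernel code). "act_entries" stands in for
 * the estimated number of active TLB entries; multiplying the base
 * page count by the CPU count models the cost of sending the range
 * flush IPI to every CPU in the mm's cpumask.
 */
static bool should_flush_all(unsigned long nr_base_pages,
			     unsigned long act_entries,
			     unsigned int nr_cpus)
{
	/* The patch's change: weight the per-page cost by CPU count */
	nr_base_pages *= nr_cpus;

	return nr_base_pages > act_entries;
}

int main(void)
{
	/* 64 pages against 512 entries: ranged flush on one CPU... */
	printf("1 CPU:   %s\n",
	       should_flush_all(64, 512, 1) ? "full flush" : "range flush");

	/* ...but a single full flush once 16 CPUs must be flushed */
	printf("16 CPUs: %s\n",
	       should_flush_all(64, 512, 16) ? "full flush" : "range flush");
	return 0;
}

The effect is that the same range which is cheap to flush page-by-page
on one CPU tips over the balance point when many CPUs share the mm, at
which point one global flush per CPU is the cheaper option.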