In the page reclaim code, we only track the CPU(s) where the TLB needs
to be flushed, rather than all the individual mappings that may be
getting invalidated.

Use broadcast TLB flushing when that is available.

Signed-off-by: Rik van Riel <riel@xxxxxxxxxxx>
---
 arch/x86/mm/tlb.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 266d5174fc7b..64f1679c37e1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1310,8 +1310,16 @@ EXPORT_SYMBOL_GPL(__flush_tlb_all);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	struct flush_tlb_info *info;
+	int cpu;
+
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
+		guard(preempt)();
+		invlpgb_flush_all_nonglobals();
+		tlbsync();
+		return;
+	}
 
-	int cpu = get_cpu();
+	cpu = get_cpu();
 
 	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
 				  TLB_GENERATION_INVALID);
-- 
2.47.1
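
For reviewers without the earlier patches in this series at hand:
invlpgb_flush_all_nonglobals() and tlbsync() are helpers introduced
earlier in the series. A rough sketch of what they boil down to,
based on the INVLPGB/TLBSYNC encodings in the AMD APM rather than on
the exact helper bodies from the series:

	/*
	 * INVLPGB (opcode 0f 01 fe) takes its operands in rax, ecx
	 * and edx. With no address/PCID/ASID valid bits and no
	 * "include global" flag set in rax, it broadcasts an
	 * invalidation of all non-global TLB entries to all CPUs.
	 */
	static inline void invlpgb_flush_all_nonglobals(void)
	{
		asm volatile(".byte 0x0f, 0x01, 0xfe"
			     : : "a" (0UL), "c" (0U), "d" (0U) : "memory");
	}

	/*
	 * TLBSYNC (opcode 0f 01 ff) waits until every INVLPGB issued
	 * by this CPU has completed on all remote CPUs.
	 */
	static inline void tlbsync(void)
	{
		asm volatile(".byte 0x0f, 0x01, 0xff" : : : "memory");
	}

This is also why the new code path uses guard(preempt)(): TLBSYNC only
waits for broadcasts issued from the CPU executing it, so the task must
not migrate between the INVLPGB and the TLBSYNC.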