Now that we have lazy user ASID flushing, use it even when INVPCID is
available. Even if INVPCID were not slower than a flushing CR3 write
(it is), deferring would still allow folding multiple user flushes
into one.

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
 arch/x86/include/asm/tlbflush.h | 38 ++++++++++++++------------------------
 1 file changed, 14 insertions(+), 24 deletions(-)

--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -377,33 +377,23 @@ static inline void flush_user_asid(u16 a
 
 static inline void __native_flush_tlb(void)
 {
-	if (!cpu_feature_enabled(X86_FEATURE_INVPCID)) {
-		flush_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));
+	flush_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));
 
-		/*
-		 * If current->mm == NULL then we borrow a mm
-		 * which may change during a task switch and
-		 * therefore we must not be preempted while we
-		 * write CR3 back:
-		 */
-		preempt_disable();
-		native_write_cr3(__native_read_cr3());
-		preempt_enable();
-		/*
-		 * Does not need tlb_flush_shared_nonglobals()
-		 * since the CR3 write without PCIDs flushes all
-		 * non-globals.
-		 */
-		return;
-	}
 	/*
-	 * We are no longer using globals with KAISER, so a
-	 * "nonglobals" flush would work too. But, this is more
-	 * conservative.
-	 *
-	 * Note, this works with CR4.PCIDE=0 or 1.
+	 * If current->mm == NULL then we borrow a mm
+	 * which may change during a task switch and
+	 * therefore we must not be preempted while we
+	 * write CR3 back:
 	 */
-	invpcid_flush_all();
+	preempt_disable();
+	native_write_cr3(__native_read_cr3());
+	preempt_enable();
+	/*
+	 * Does not need tlb_flush_shared_nonglobals()
+	 * since the CR3 write without PCIDs flushes all
+	 * non-globals.
+	 */
+	return;
 }
 
 static inline void __native_flush_tlb_global_irq_disabled(void)