Maybe it's me, but I find it rather hard to figure out whether flush_tlb_func_common() is safe, since it can be re-entered: a local TLB flush may be performed and, during this local flush, a remote shootdown IPI may be received. Did I miss IRQs being disabled during the local flush? If not, it raises the question of whether the flush_tlb_func_common() changes were designed with re-entry in mind. A note about this in the comments would really be helpful.

Anyhow, I suspect that at least the following warning can be triggered:

	WARN_ON_ONCE(local_tlb_gen > mm_tlb_gen);

> static void flush_tlb_func_common(const struct flush_tlb_info *f,
>                                   bool local, enum tlb_flush_reason reason)
> {
> +	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
> +
> +	/*
> +	 * Our memory ordering requirement is that any TLB fills that
> +	 * happen after we flush the TLB are ordered after we read
> +	 * active_mm's tlb_gen.  We don't need any explicit barrier
> +	 * because all x86 flush operations are serializing and the
> +	 * atomic64_read operation won't be reordered by the compiler.
> +	 */
> +	u64 mm_tlb_gen = atomic64_read(&loaded_mm->context.tlb_gen);

If, for example, a shootdown IPI can be delivered here...

> +	u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[0].tlb_gen);
> +
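To make the interleaving concrete, here is a minimal single-threaded userspace sketch of the sequence I am worried about. It is not kernel code: the IPI is simulated by a direct call at the racy point, and all names (mm_tlb_gen_shared, local_tlb_gen_percpu, ipi_flush) are invented for illustration.

/* race_sketch.c: hypothetical model of the re-entry window, not kernel code. */
#include <stdio.h>
#include <stdint.h>
#include <stdatomic.h>

/* Stand-ins for mm->context.tlb_gen and cpu_tlbstate.ctxs[0].tlb_gen. */
static atomic_uint_least64_t mm_tlb_gen_shared = 5; /* bumped by remote flushers */
static uint64_t local_tlb_gen_percpu = 5;           /* this CPU starts up to date */

/* Stand-in for the shootdown IPI handler re-entering the flush path. */
static void ipi_flush(void)
{
	uint64_t mm_gen = atomic_load(&mm_tlb_gen_shared);	/* sees 6 */

	if (local_tlb_gen_percpu < mm_gen) {
		/* ... the actual TLB flush would happen here ... */
		local_tlb_gen_percpu = mm_gen;			/* 5 -> 6 */
	}
}

int main(void)
{
	/* Local flush path: snapshot mm_tlb_gen (sees 5). */
	uint64_t mm_tlb_gen = atomic_load(&mm_tlb_gen_shared);

	/*
	 * Window between the two reads: a remote CPU bumps the
	 * generation and its shootdown IPI lands on us now.
	 */
	atomic_fetch_add(&mm_tlb_gen_shared, 1);	/* remote CPU: 5 -> 6 */
	ipi_flush();					/* nested flush completes */

	/* Local flush resumes with its stale snapshot. */
	uint64_t local_tlb_gen = local_tlb_gen_percpu;	/* sees 6 */

	if (local_tlb_gen > mm_tlb_gen)
		printf("WARN_ON_ONCE would fire: local_tlb_gen=%llu > mm_tlb_gen=%llu\n",
		       (unsigned long long)local_tlb_gen,
		       (unsigned long long)mm_tlb_gen);
	return 0;
}

If IRQs really are disabled across the local flush, this window does not exist and the sketch is moot; otherwise, the nested invocation can advance the per-CPU generation past the snapshot taken by the interrupted invocation, and the warning fires.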