On Wed, Oct 26, 2022 at 10:43:21PM +0300, Nadav Amit wrote:
> On Oct 25, 2022, at 6:06 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> >  	if (!force_flush && !tlb->fullmm && details &&
> > +	    details->zap_flags & ZAP_FLAG_FORCE_FLUSH)
> > +		force_flush = 1;
>
> Isn’t it too big of a hammer?

It is the obvious hammer :-) TLB invalidate under pte_lock when
clearing.

> At the same time, the whole reasoning about TLB flushes is not
> getting any simpler. We had cases in which MADV_DONTNEED and another
> concurrent operation that effectively zapped PTEs (e.g., another
> MADV_DONTNEED) caused zap_pte_range() to skip entries since
> pte_none() was true. To resolve these cases we relied on
> tlb_finish_mmu() to flush the range when needed (i.e., flush the
> whole range when mm_tlb_flush_nested()).

Yeah, whoever thought that allowing concurrency there was a great idea
:/ And I must admit to hating the pending thing with a passion.

And that mm_tlb_flush_nested() thing in tlb_finish_mmu() is a giant
hack at the best of times. Also; I feel it's part of the problem here;
it violates the basic rules we've had for a very long time.

> Now, I do not have a specific broken scenario in mind following this
> change, but it all sounds to me a bit dangerous and at the same time
> can potentially introduce new overheads.

I'll take correctness over being fast. As you say, this whole TLB
thing is getting out of hand.

> One alternative may be using mm_tlb_flush_pending() when setting a
> new PTE to check for pending flushes and flushing the TLB if that is
> the case. This is somewhat similar to what ptep_clear_flush() does.
> Anyhow, I guess this might induce some overheads. As noted before, it
> is possible to track pending TLB flushes in VMA/page-table
> granularity, with different tradeoffs of overheads.

Right; I just don't believe in VMAs for this, they're *waaay* too big.
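
For reference, "TLB invalidate under pte_lock" is the existing
force_flush handling at the tail of zap_pte_range(), which the quoted
hunk extends to fire whenever ZAP_FLAG_FORCE_FLUSH was passed. A rough
sketch of that unlock path (not the verbatim mm/memory.c code; the
rmap and retry details are elided):

	/*
	 * Sketch: once force_flush is set, the batched range is
	 * invalidated *before* the pte lock is dropped, so no CPU can
	 * keep using a stale translation for a pte cleared under ptl.
	 */
	if (force_flush)
		tlb_flush_mmu_tlbonly(tlb);	/* TLB flush under ptl */
	pte_unmap_unlock(start_pte, ptl);

	if (force_flush) {
		force_flush = 0;
		tlb_flush_mmu(tlb);		/* now free batched pages */
	}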
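
The mm_tlb_flush_nested() fallback in tlb_finish_mmu() that is being
called a hack looks roughly like this (abbreviated from
mm/mmu_gather.c; the exact shape varies by kernel version):

void tlb_finish_mmu(struct mmu_gather *tlb)
{
	/*
	 * Two concurrent zappers (e.g. racing MADV_DONTNEEDs) can each
	 * skip pte_none() entries the other already cleared, so neither
	 * of their batched ranges covers everything that was unmapped.
	 * Detected nesting is papered over with a full-mm flush.
	 */
	if (mm_tlb_flush_nested(tlb->mm)) {
		tlb->fullmm = 1;
		__tlb_reset_range(tlb);
		tlb->freed_tables = 1;
	}

	tlb_flush_mmu(tlb);

	/* ... free batched pages, dec_tlb_flush_pending(), etc. ... */
}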
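
The alternative in the last quoted paragraph, flushing at PTE-install
time when another thread's flush is still pending, could look like the
below. set_pte_at_flush_pending() is a hypothetical name, not an
existing helper; the flush-if-accessible pattern mirrors
ptep_clear_flush():

/*
 * Hypothetical helper (sketch only): before installing a new pte,
 * flush the page if some other thread has a deferred TLB flush
 * pending for this mm, so a slot is never repopulated while a stale
 * translation for it may still be live.
 */
static inline void set_pte_at_flush_pending(struct vm_area_struct *vma,
					    unsigned long addr,
					    pte_t *ptep, pte_t pte)
{
	struct mm_struct *mm = vma->vm_mm;

	if (mm_tlb_flush_pending(mm))
		flush_tlb_page(vma, addr);	/* as ptep_clear_flush() */
	set_pte_at(mm, addr, ptep, pte);
}

The overhead Nadav alludes to would be the mm_tlb_flush_pending()
check (an atomic_read() of mm->tlb_flush_pending) on every PTE
installation, plus spurious per-page flushes whenever any batched
unmap is in flight anywhere in the mm.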