On 05/23/23 at 04:02pm, Thomas Gleixner wrote:
> vmap blocks which have active mappings cannot be purged. Allocations which
> have been freed are accounted for in vmap_block::dirty_min/max, so that
> they can be detected in _vm_unmap_aliases() as potentially stale TLBs.
>
> If there are several invocations of _vm_unmap_aliases() then each of them
> will flush the dirty range. That's pointless and just increases the
> probability of full TLB flushes.
>
> Avoid that by resetting the flush range after accounting for it. That's
> safe versus other invocations of _vm_unmap_aliases() because this is all
> serialized with vmap_purge_lock.
>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> ---
>  mm/vmalloc.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2224,7 +2224,7 @@ static void vb_free(unsigned long addr,
>
>  	spin_lock(&vb->lock);
>
> -	/* Expand dirty range */
> +	/* Expand the not yet TLB flushed dirty range */
>  	vb->dirty_min = min(vb->dirty_min, offset);
>  	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));
>
> @@ -2262,7 +2262,7 @@ static void _vm_unmap_aliases(unsigned l
>  		 * space to be flushed.
>  		 */
>  		if (!purge_fragmented_block(vb, vbq, &purge_list) &&
> -		    vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
> +		    vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
>  			unsigned long va_start = vb->va->va_start;
>  			unsigned long s, e;
>
> @@ -2272,6 +2272,10 @@ static void _vm_unmap_aliases(unsigned l
>  			start = min(s, start);
>  			end = max(e, end);
>
> +			/* Prevent that this is flushed more than once */
> +			vb->dirty_min = VMAP_BBMAP_BITS;
> +			vb->dirty_max = 0;
> +

This is really a great catch and improvement.

Reviewed-by: Baoquan He <bhe@xxxxxxxxxx>

>  			flush = 1;
>  		}
>  		spin_unlock(&vb->lock);
>
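
For illustration only, here is a minimal, self-contained sketch of the reset-after-flush idea the changelog describes. The names (struct block, mark_dirty(), flush_dirty(), NBITS) are hypothetical stand-ins chosen for this sketch, not the kernel's vmap_block/_vm_unmap_aliases() code; NBITS plays the role of VMAP_BBMAP_BITS as the "empty range" sentinel for dirty_min.

/*
 * Hypothetical sketch of the pattern: accumulate a dirty range on free,
 * account it once when flushing, then reset it so a second flush pass
 * sees nothing pending. Not kernel code.
 */
#include <stdio.h>

#define NBITS 1024UL	/* stand-in for VMAP_BBMAP_BITS */

struct block {
	unsigned long dirty_min;	/* lowest freed offset not yet flushed */
	unsigned long dirty_max;	/* one past the highest such offset */
};

/* Expand the not-yet-flushed dirty range when a sub-allocation is freed. */
static void mark_dirty(struct block *b, unsigned long start, unsigned long end)
{
	if (start < b->dirty_min)
		b->dirty_min = start;
	if (end > b->dirty_max)
		b->dirty_max = end;
}

/* Account the pending range into [start, end) and reset it afterwards. */
static int flush_dirty(struct block *b, unsigned long *start, unsigned long *end)
{
	if (!b->dirty_max)
		return 0;	/* nothing pending */

	if (b->dirty_min < *start)
		*start = b->dirty_min;
	if (b->dirty_max > *end)
		*end = b->dirty_max;

	/* Prevent the same range from being flushed more than once. */
	b->dirty_min = NBITS;
	b->dirty_max = 0;
	return 1;
}

int main(void)
{
	struct block b = { .dirty_min = NBITS, .dirty_max = 0 };
	unsigned long start = ~0UL, end = 0;
	int flushed;

	mark_dirty(&b, 16, 64);

	flushed = flush_dirty(&b, &start, &end);
	printf("first pass:  flushed=%d range=[%lu, %lu)\n", flushed, start, end);

	start = ~0UL;
	end = 0;
	flushed = flush_dirty(&b, &start, &end);
	printf("second pass: flushed=%d\n", flushed);
	return 0;
}

The second call reports nothing to flush, which mirrors why a later _vm_unmap_aliases() invocation no longer re-flushes a range that an earlier one already accounted for.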