Re: Excessive TLB flush ranges

On Mon, May 15, 2023 at 11:11:45PM +0200, Thomas Gleixner wrote:
> On Mon, May 15 2023 at 21:46, Thomas Gleixner wrote:
> > On Mon, May 15 2023 at 17:59, Russell King wrote:
> >> On Mon, May 15, 2023 at 06:43:40PM +0200, Thomas Gleixner wrote:
> > That reproduces in a VM easily and has exactly the same behaviour:
> >
> >        Extra page[s] via         The actual allocation
> >        _vm_unmap_aliases() Pages                     Pages Flush start       Pages
> > alloc:                           ffffc9000058e000      2
> > free : ffff888144751000      1   ffffc9000058e000      2   ffff888144751000  17312759359
> >
> > alloc:                           ffffc90000595000      2
> > free : ffff8881424f0000      1   ffffc90000595000      2   ffff8881424f0000  17312768167
> >
> > .....
> >
> > seccomp seems to install 29 BPF programs for that process. So on exit()
> > this results in 29 full TLB flushes on x86, where each of them is used
> > to flush exactly three TLB entries.
> >
> > The actual two page allocation (ffffc9...) is in the vmalloc space, the
> > extra page (ffff88...) is in the direct mapping.
> 
> I tried to flush them one by one, which is actually slightly slower.
> That's not surprising as there are 3 * 29 instead of 29 IPIs and the
> IPIs dominate the picture.
> 
> But that's not necessarily true for ARM32 as there are no IPIs involved
> on the machine we are using, which is a dual-core Cortex-A9.
> 
> So I came up with the hack below, which is as fast as the full flush
> variant, while the performance impact on the other CPUs is marginally
> lower according to perf.
> 
> That should probably take another argument telling how many TLB entries
> this flush affects, i.e. 3 in this example, so an architecture can
> sensibly decide whether it wants to do a full flush or not.
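
Just so I understand the idea: something along these lines on the x86
side? Completely untested sketch - the nr_pages argument, re-using
tlb_single_page_flush_ceiling as the cut-off, and the on_each_cpu()
wiring of do_flush_vas() are all my guesses, not taken from your patch.

	/*
	 * Sketch only: nr_pages is the total number of pages covered by
	 * the purge list, so the architecture can choose between
	 * per-page invalidation and a full flush.
	 */
	void flush_tlb_kernel_vas(struct list_head *list, unsigned long nr_pages)
	{
		if (nr_pages > tlb_single_page_flush_ceiling)
			on_each_cpu(do_flush_tlb_all, NULL, 1);
		else
			on_each_cpu(do_flush_vas, list, 1);
	}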


> 
> Thanks,
> 
>         tglx
> ---
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1728,6 +1728,7 @@ static bool __purge_vmap_area_lazy(unsig
>  	unsigned int num_purged_areas = 0;
>  	struct list_head local_purge_list;
>  	struct vmap_area *va, *n_va;
> +	struct vmap_area tmp = { .va_start = start, .va_end = end };
>  
>  	lockdep_assert_held(&vmap_purge_lock);
>  
> @@ -1747,7 +1748,12 @@ static bool __purge_vmap_area_lazy(unsig
>  		list_last_entry(&local_purge_list,
>  			struct vmap_area, list)->va_end);
>  
> -	flush_tlb_kernel_range(start, end);
> +	if (tmp.va_end > tmp.va_start)
> +		list_add(&tmp.list, &local_purge_list);
> +	flush_tlb_kernel_vas(&local_purge_list);
> +	if (tmp.va_end > tmp.va_start)
> +		list_del(&tmp.list);

So basically we end up iterating over each VA range, which seems
sensible if the overall start..end range is large and would otherwise
have to be walked page by page.

In the case you have, are "start" and "end" passed in as a real range
on function entry, or are they set to ULONG_MAX, 0? What I'm wondering
is whether we could get away with just having flush_tlb_kernel_vas().
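
That is, if "start"/"end" never cover anything that isn't already on
local_purge_list, the flush in __purge_vmap_area_lazy() could presumably
collapse to just (untested sketch):

	/* no temporary vmap_area for [start, end) needed */
	flush_tlb_kernel_vas(&local_purge_list);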

Whether that's acceptable to others is a different question :)

> +
>  	resched_threshold = lazy_max_pages() << 1;
>  
>  	spin_lock(&free_vmap_area_lock);
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -10,6 +10,7 @@
>  #include <linux/debugfs.h>
>  #include <linux/sched/smt.h>
>  #include <linux/task_work.h>
> +#include <linux/vmalloc.h>
>  
>  #include <asm/tlbflush.h>
>  #include <asm/mmu_context.h>
> @@ -1081,6 +1082,24 @@ void flush_tlb_kernel_range(unsigned lon
>  	}
>  }
>  
> +static void do_flush_vas(void *arg)
> +{
> +	struct list_head *list = arg;
> +	struct vmap_area *va;
> +	unsigned long addr;
> +
> +	list_for_each_entry(va, list, list) {
> +		/* flush range by one by one 'invlpg' */
> +		for (addr = va->va_start; addr < va->va_end; addr += PAGE_SIZE)
> +			flush_tlb_one_kernel(addr);

Isn't this just the same as:
	flush_tlb_kernel_range(va->va_start, va->va_end);

At least on ARM32 it should be - the range will be iterated over in
assembly instead of C; it'll be an out-of-line call, but it should still
be slightly faster.
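
Something like this for an ARM32 flush_tlb_kernel_vas(), if it comes to
that - untested sketch, and it assumes the per-VA ranges stay small
enough that falling back to a full flush is never worth it:

	#include <linux/list.h>
	#include <linux/vmalloc.h>
	#include <asm/tlbflush.h>

	void flush_tlb_kernel_vas(struct list_head *list)
	{
		struct vmap_area *va;

		/* No IPIs here; flush each VA's range directly. */
		list_for_each_entry(va, list, list)
			flush_tlb_kernel_range(va->va_start, va->va_end);
	}

That keeps the page-by-page loop in the architecture's existing
flush_tlb_kernel_range() path instead of open-coding it in C.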

Thanks.

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!



