Re: Excessive TLB flush ranges

On Wed, May 17, 2023 at 12:31:04PM +0200, Thomas Gleixner wrote:
> On Tue, May 16 2023 at 18:23, Nadav Amit wrote:
> >> On May 16, 2023, at 5:23 PM, Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:

> > My experience with non-IPI based TLB invalidations is more limited. IIUC
> > the usage model is that the TLB shootdowns should be invoked ASAP
> > (perhaps each range can be batched, but there is no sense of batching
> > multiple ranges), and then later you would issue some barrier to ensure
> > prior TLB shootdown invocations have been completed.
> >
> > If that is the (use) case, I am not sure the abstraction you used in
> > your prototype is the best one.
> 
> The way how arm/arm64 implement that in software is:
> 
>     magic_barrier1();
>     flush_range_with_magic_opcodes();
>     magic_barrier2();

FWIW, on arm64 that sequence (for leaf entries only) is:

	/*
	 * Make sure prior writes to the page table entries are visible to all
	 * CPUs, so that *subsequent* page table walks will see the latest
	 * values.
	 *
	 * This is roughly __smp_wmb().
	 */
	dsb(ishst)		// AKA magic_barrier1()

	/*
	 * The "TLBI *IS, <addr>" instructions send a message to all other
	 * CPUs, essentially saying "please start invalidating entries for
	 * <addr>"
	 *
	 * The "TLBI *ALL*IS" instructions send a message to all other CPUs,
	 * essentially saying "please start invalidating all entries".
	 *
	 * In theory, this could be for discontiguous ranges.
	 */
	flush_range_with_magic_opcodes()

	/*
	 * Wait for acknowledgement that all prior TLBIs have completed. This
	 * also ensures that all accesses using those translations have
	 * completed.
	 *
	 * This waits for all relevant CPUs to acknowledge completion of any
	 * prior TLBIs sent by this CPU.
	 */
	dsb(ish) 		// AKA magic_barrier2()
	isb()

So you can batch a bunch of "TLBI *IS, <addr>" with a single barrier for
completion, or you can use a single "TLBI *ALL*IS" to invalidate everything.
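
To make the batching concrete, here's a rough sketch in C with inline asm
(a sketch only: the function name is made up, and the real helper in
arch/arm64/include/asm/tlbflush.h also encodes at least the ASID into the
TLBI operand, which is omitted here):

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	static inline void flush_tlb_range_sketch(unsigned long start,
						  unsigned long end)
	{
		unsigned long addr;

		/* magic_barrier1(): make prior PTE writes visible to all CPUs */
		asm volatile("dsb ishst" : : : "memory");

		/* One "TLBI *IS, <addr>" per page, batched under one barrier pair */
		for (addr = start; addr < end; addr += PAGE_SIZE)
			asm volatile("tlbi vale1is, %0"
				     : : "r" (addr >> PAGE_SHIFT) : "memory");

		/* magic_barrier2(): wait for all prior TLBIs to complete */
		asm volatile("dsb ish" : : : "memory");
		asm volatile("isb" : : : "memory");
	}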

It can still be worth using a single "TLBI *ALL*IS", as arm64 has done since commit:

  05ac65305437e8ef ("arm64: fix soft lockup due to large tlb flush range")

... as for a large range, issuing a bunch of "TLBI *IS, <addr>" can take a
while, and can require the recipient CPUs to do more work than they might have
to do for a single "TLBI *ALL*IS".

The point at which invalidating everything is better depends on a number of
factors (e.g. the impact of all CPUs then needing to make new page table
walks). Currently we have a somewhat arbitrary boundary beyond which we choose
to invalidate everything (and which has been tweaked a bit over time); there
isn't really a one-size-fits-all best answer.
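
To make that boundary concrete, the shape of the check is roughly the
following (the names and threshold value are made up for illustration, it
reuses the per-page sketch above, and it is not the kernel's actual code):

	/* Hypothetical cut-over point; the real value has changed over time */
	#define MAX_TLBI_OPS_SKETCH	512

	static void flush_tlb_range_or_all_sketch(unsigned long start,
						  unsigned long end)
	{
		if (((end - start) >> PAGE_SHIFT) > MAX_TLBI_OPS_SKETCH) {
			/* Large range: one "TLBI *ALL*IS" with a bounded cost */
			asm volatile("dsb ishst" : : : "memory");
			asm volatile("tlbi vmalle1is" : : : "memory");
			asm volatile("dsb ish" : : : "memory");
			asm volatile("isb" : : : "memory");
			return;
		}

		/* Small range: batch per-page "TLBI *IS, <addr>" as above */
		flush_tlb_range_sketch(start, end);
	}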

Thanks,
Mark.



