On Wed, Feb 12, 2020 at 02:13:56PM +0000, qi.fuli@xxxxxxxxxxx wrote:
> On 2/4/20 5:17 AM, Andrea Arcangeli wrote:
> > With multiple NUMA nodes and multiple sockets, the tlbi broadcast
> > shall be delivered through the interconnects, in turn increasing the
> > interconnect traffic and the latency of the tlbi broadcast
> > instruction.
> >
> > Even within a single NUMA node the latency of the tlbi broadcast
> > instruction increases almost linearly with the number of CPUs trying
> > to send tlbi broadcasts at the same time.
> >
> > When the process is single threaded, however, we can achieve full SMP
> > scalability by skipping the tlbi broadcasting. Other arches already
> > deploy this optimization.
> >
> > After the local TLB flush, however, this means the ASID context goes
> > out of sync in all CPUs except the local one. This can be tracked in
> > the mm_cpumask(mm): if the bit is set it means the asid context is
> > stale for that CPU. This results in an extra local ASID TLB flush
> > only if a single threaded process is migrated to a different CPU and
> > only after a TLB flush. No extra local TLB flush is needed for the
> > common case of single threaded processes context scheduling within
> > the same CPU and for multithreaded processes.
> >
> > Skipping the tlbi instruction broadcasting is already implemented in
> > local_flush_tlb_all(); this patch only extends it to flush_tlb_mm(),
> > flush_tlb_range() and flush_tlb_page() too.
> >
> > Here's the result of 32 CPUs (ARMv8 Ampere) running mprotect at the
> > same time from 32 single threaded processes before the patch:
> >
> >  Performance counter stats for './loop' (3 runs):
> >
> >                   0      dummy
> >
> >            2.121353 +- 0.000387 seconds time elapsed  ( +- 0.02% )
> >
> > and with the patch applied:
> >
> >  Performance counter stats for './loop' (3 runs):
> >
> >                   0      dummy
> >
> >           0.1197750 +- 0.0000827 seconds time elapsed  ( +- 0.07% )
>
> I have tested this patch on ThunderX2 with the Himeno benchmark[1]
> with the LARGE calculation size. Here are the results.
>
> w/o patch: MFLOPS : 1149.480174
> w/  patch: MFLOPS : 1110.653003
>
> In order to validate the effectiveness of the patch, I ran a
> single-threaded program, which calls mprotect() in a loop to issue the
> tlbi broadcast instruction on one CPU core. At the same time, I ran
> the Himeno benchmark on another CPU core. The results are:
>
> w/o patch: MFLOPS : 860.238792
> w/  patch: MFLOPS : 1110.449666
>
> Though the Himeno benchmark is a microbenchmark, I hope it helps.

It doesn't really help. What if you have a two-thread program calling
mprotect() in a loop? IOW, how is this relevant to real-world
scenarios?

Thanks.

--
Catalin
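
For readers following the thread, here is a conceptual sketch of the
mechanism Andrea describes above. It is not the actual patch: the
sketch_* helpers are hypothetical, the single-threaded test is one
plausible choice, and local_flush_tlb_all() stands in for a more
targeted per-ASID local flush. It assumes it is called with preemption
disabled, as the real flush and switch_mm() paths are.

	#include <linux/mm_types.h>
	#include <linux/cpumask.h>
	#include <linux/sched.h>
	#include <linux/smp.h>
	#include <asm/tlbflush.h>

	/*
	 * Sketch: skip the tlbi broadcast when the mm has a single
	 * user running on this CPU; flush locally and mark the ASID
	 * stale on every other CPU via mm_cpumask(mm).
	 */
	static void sketch_flush_tlb_mm(struct mm_struct *mm)
	{
		if (atomic_read(&mm->mm_users) <= 1 && current->mm == mm) {
			/* local flush only, no tlbi broadcast */
			local_flush_tlb_all();
			/* every other CPU now has a stale ASID for this mm */
			cpumask_setall(mm_cpumask(mm));
			cpumask_clear_cpu(smp_processor_id(), mm_cpumask(mm));
		} else {
			/* multithreaded: regular tlbi broadcast path */
			flush_tlb_mm(mm);
		}
	}

	/*
	 * Sketch of the switch_mm() side: if our bit is set, the ASID
	 * went stale while the task ran elsewhere, so pay one extra
	 * local flush before running with this mm.
	 */
	static void sketch_check_stale_asid(struct mm_struct *mm)
	{
		if (cpumask_test_and_clear_cpu(smp_processor_id(),
					       mm_cpumask(mm)))
			local_flush_tlb_all();
	}

This is why the extra local flush is only paid on migration after a
flush: a single-threaded task that keeps scheduling on the same CPU
never finds its own bit set.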
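
The './loop' test program itself was not posted in this thread; a
minimal userspace sketch of what such an mprotect() loop could look
like follows (iteration count and mapping size are arbitrary):

	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 4096;
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;

		/*
		 * Each permission change forces a TLB flush for the
		 * range; on arm64 without the patch that is a tlbi
		 * broadcast visible to all CPUs in the system.
		 */
		for (long i = 0; i < 100000; i++) {
			mprotect(p, len, PROT_READ);
			mprotect(p, len, PROT_READ | PROT_WRITE);
		}
		return 0;
	}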