RE: [RFC PATCH v2 00/11] AMD broadcast TLB invalidation

From: riel@xxxxxxxxxxx <riel@xxxxxxxxxxx> Sent: Sunday, December 22, 2024 6:55 PM

> 
> Add support for broadcast TLB invalidation using AMD's INVLPGB instruction.
> 
> This allows the kernel to invalidate TLB entries on remote CPUs without
> needing to send IPIs, without having to wait for remote CPUs to handle
> those interrupts, and with less interruption to what was running on
> those CPUs.
> 
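For readers who have not run into these instructions before, here is a
minimal sketch of how INVLPGB and TLBSYNC can be issued from C. The
opcodes and the broad register layout come from the AMD APM; the helper
names and exact operand packing shown here are illustrative, not
necessarily what this series uses.

/*
 * Illustrative helpers, not this series' actual code. Per the AMD
 * APM, INVLPGB (opcode 0F 01 FE) broadcasts a TLB invalidation
 * described by rAX/ECX/EDX, and TLBSYNC (opcode 0F 01 FF) waits
 * until all broadcast invalidations issued by this CPU have
 * completed on all processors.
 */
static inline void __invlpgb(unsigned long asid, unsigned long pcid,
			     unsigned long addr, u16 nr_pages,
			     bool pmd_stride, u8 flags)
{
	u32 edx = (pcid << 16) | asid;			/* PCID and ASID */
	u32 ecx = ((u32)pmd_stride << 31) | (nr_pages - 1); /* nr_pages >= 1 */
	u64 rax = addr | flags;				/* VA plus flag bits */

	/* Raw opcode bytes, since older assemblers lack the mnemonic. */
	asm volatile(".byte 0x0f, 0x01, 0xfe" : : "a" (rax), "c" (ecx), "d" (edx));
}

static inline void __tlbsync(void)
{
	asm volatile(".byte 0x0f, 0x01, 0xff" : : : "memory");
}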
> Because x86 PCID space is limited, and there are some very large
> systems out there, broadcast TLB invalidation is only used for
> processes that are active on 3 or more CPUs, with the threshold
> being gradually increased the more the PCID space gets exhausted.
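To make that policy concrete, a sketch with entirely invented names
(the series' real heuristic may well differ):

/*
 * Invented illustration of the policy above: an mm is only given
 * a global ASID (and with it INVLPGB-based flushing) once it is
 * active on enough CPUs, and the bar rises as the global ASID
 * space fills up. Both helpers below are hypothetical.
 */
static bool mm_wants_global_asid(struct mm_struct *mm)
{
	/* Base threshold of 3 CPUs, raised under ASID pressure. */
	int threshold = 3 + global_asid_pressure();

	return mm_active_cpus(mm) >= threshold;
}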

Rik --

What is this patch set's expectation about INVLPGB and TLBSYNC
availability and usage in a VM? I see that INVLPGB and TLBSYNC
behavior in a VM is spec'd in the AMD Programmer's Manual, but
I wonder about their impact on a multi-tenant host, such as in a
public cloud environment. And given what this patch set does in
assigning global ASIDs, should X86_FEATURE_INVLPGB be disabled
when running in a VM where the hypervisor has, for whatever
reason, enabled INVLPGB/TLBSYNC in its VMs?
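To make the question concrete, the conservative answer would be
something like the following in the AMD init path (illustrative only,
and whether it is the right policy is exactly what I'm asking;
cpu_has() and setup_clear_cpu_cap() are existing kernel helpers, and
X86_FEATURE_INVLPGB is the flag this series adds):

/*
 * Illustrative sketch: refuse to use INVLPGB when running as a
 * guest, regardless of what the hypervisor advertises.
 */
static void init_amd_invlpgb(struct cpuinfo_x86 *c)
{
	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
		setup_clear_cpu_cap(X86_FEATURE_INVLPGB);
}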

My knowledge of the details here is pretty limited, so my
question may just reflect my ignorance. But it would be good
for the code comments and/or commit messages to include
explicit statements about what is expected in a VM.

Michael

> 
> Combined with the removal of unnecessary lru_add_drain calls
> (see https://lkml.org/lkml/2024/12/19/1388) this results in a
> nice performance boost for the will-it-scale tlb_flush2_threads
> test on an AMD Milan system with 36 cores:
> 
> - vanilla kernel:           527k loops/second
> - lru_add_drain removal:    731k loops/second
> - only INVLPGB:             527k loops/second
> - lru_add_drain + INVLPGB: 1157k loops/second
> 
> Profiling with only the INVLPGB changes showed that while
> TLB invalidation went down from 40% of total CPU time
> to only around 4%, the contention simply moved to the
> LRU lock.
> 
> Fixing both at the same time roughly doubles the number
> of iterations per second for this case.
> 
> v2:
> - Apply suggestions by Peter and Borislav (thank you!)
> - Fix a bug in arch_tlbbatch_flush, where we need to both do the
>   TLBSYNC and flush the CPUs that are in the cpumask (sketched
>   below).
> - Some updates to comments and changelogs based on questions.
> 
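For reference, the shape of the arch_tlbbatch_flush fix described in
the v2 notes is roughly the following. This is a paraphrased sketch,
not the actual patch: the used_invlpgb field name is assumed, while
flush_tlb_multi() and full_flush_tlb_info are existing x86 TLB-flush
internals.

/*
 * A batched flush must both wait for this CPU's outstanding
 * broadcast invalidations (TLBSYNC) and still IPI-flush whatever
 * CPUs remain in the batch's cpumask; doing only one of the two
 * can leave stale TLB entries behind.
 */
void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	if (batch->used_invlpgb) {
		__tlbsync();		/* wait for our INVLPGBs everywhere */
		batch->used_invlpgb = false;
	}

	if (!cpumask_empty(&batch->cpumask)) {
		flush_tlb_multi(&batch->cpumask, &full_flush_tlb_info);
		cpumask_clear(&batch->cpumask);
	}
}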
