Re: [PATCH] x86/barrier: Do not serialize MSR accesses on AMD

On Tue, Jan 30, 2024 at 09:26:28AM +0000, Kishon Vijay Abraham I wrote:
> From: "Borislav Petkov (AMD)" <bp@xxxxxxxxx>
> 
> commit 04c3024560d3a14acd18d0a51a1d0a89d29b7eb5 upstream.
> 
> AMD does not have the requirement for a synchronization barrier when
> accessing a certain group of MSRs. Do not incur that unnecessary
> penalty there.
> 
> There will be a CPUID bit which explicitly states that an MFENCE is not
> needed. Once that bit is added to the APM, this will be extended with
> it.
> 
> While at it, move to processor.h to avoid include hell. Untangling that
> file properly is a matter for another day.
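> 
> For illustration, a simplified sketch of the shape of the change (the
> actual diff also moves the helper into processor.h and defines the
> feature bit):
> 
>   /* arch/x86/include/asm/processor.h (simplified sketch) */
>   static __always_inline void weak_wrmsr_fence(void)
>   {
>   	/* Patch out the fence on CPUs that do not need it */
>   	alternative("mfence; lfence", "", ALT_NOT(X86_FEATURE_APIC_MSRS_FENCE));
>   }
> 
> I.e., CPUs which do not set X86_FEATURE_APIC_MSRS_FENCE get the
> MFENCE;LFENCE pair patched out at boot instead of paying for it on
> every APIC MSR write.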
> 
> Some notes on the performance aspect of why this is relevant, courtesy
> of Kishon VijayAbraham <Kishon.VijayAbraham@xxxxxxx>:
> 
> On an AMD Zen4 system with 96 cores, a modified ipi-bench[1] on a VM
> shows the x2AVIC IPI rate is 3% to 4% lower than the AVIC IPI rate. The
> ipi-bench is modified so that the IPIs are sent between two vCPUs in
> the same CCX. This also requires pinning the vCPUs to physical cores to
> prevent any scheduling latency. This simulates the use case of pinning
> vCPUs to the threads of a single CCX to avoid interrupt IPI latency.
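> 
> For reference, one hypothetical way to pin a vCPU thread from userspace
> (not part of ipi-bench itself) is via sched_setaffinity():
> 
>   /* Hypothetical helper: pin the calling thread to one physical core. */
>   #define _GNU_SOURCE
>   #include <sched.h>
> 
>   static int pin_to_core(int core)
>   {
>   	cpu_set_t set;
> 
>   	CPU_ZERO(&set);
>   	CPU_SET(core, &set);
>   	/* pid 0 == the calling thread */
>   	return sched_setaffinity(0, sizeof(set), &set);
>   }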
> 
> In order to avoid run-to-run variance (for both x2AVIC and AVIC), the
> following configuration is applied:
> 
>   1) Disable Power States in BIOS (to prevent the system from going to
>      lower power state)
> 
>   2) Run the system at a fixed frequency of 2500MHz (to prevent the
>      system from increasing the frequency under load)
> 
> With the above configuration:
> 
> *) Performance measured using ipi-bench for AVIC:
>   Average Latency:  1124.98ns [Time to send IPI from one vCPU to another vCPU]
> 
>   Cumulative throughput: 42.6759M/s [Total number of IPIs sent in a second
>                                      from 48 vCPUs simultaneously]
> 
> *) Performance measured using ipi-bench for x2AVIC:
>   Average Latency:  1172.42ns [Time to send IPI from one vCPU to another vCPU]
> 
>   Cumulative throughput: 40.9432M/s [Total number of IPIs sent in a second
>                                      from 48 vCPUs simultaneously]
> 
> From above, x2AVIC latency is ~4% higher than AVIC. However, the
> expectation is that x2AVIC performance will be better than or
> equivalent to AVIC. Upon analyzing the perf captures, it is observed
> that significant time is spent in weak_wrmsr_fence() invoked by
> x2apic_send_IPI().
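> 
> For context, the send path that hits the fence looks roughly like this
> (simplified from arch/x86/kernel/apic/x2apic_phys.c):
> 
>   static void x2apic_send_IPI(int cpu, int vector)
>   {
>   	u32 dest = per_cpu(x86_cpu_to_apicid, cpu);
> 
>   	/* The x2APIC ICR is an MSR: fence before the write where needed */
>   	weak_wrmsr_fence();
>   	__x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL);
>   }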
> 
> With the fix to skip weak_wrmsr_fence():
> 
> *) Performance measured using ipi-bench for x2AVIC:
>   Average Latency:  1117.44ns [Time to send IPI from one vCPU to another vCPU]
> 
>   Cumulative throughput: 42.9608M/s [Total number of IPIs sent in a second
>                                      from 48 vCPUs simultaneously]
> 
> Comparing the performance of x2AVIC with and without the fix, it can be
> seen that the performance improves by ~4%.
> 
> Performance captured using an unmodified ipi-bench with the 'mesh-ipi'
> option, with and without weak_wrmsr_fence(), on a Zen4 system also showed
> a significant performance improvement without weak_wrmsr_fence(). The
> 'mesh-ipi' option ignores CCX/CCD boundaries and just picks random vCPUs.
> 
>   Average throughput (10 iterations) with weak_wrmsr_fence(),
>         Cumulative throughput: 4933374 IPI/s
> 
>   Average throughput (10 iterations) without weak_wrmsr_fence(),
>         Cumulative throughput: 6355156 IPI/s
> 
> [1] https://github.com/bytedance/kvm-utils/tree/master/microbenchmark/ipi-bench
> 
> Cc: stable@xxxxxxxxxxxxxxx # 6.6+
> Signed-off-by: Borislav Petkov (AMD) <bp@xxxxxxxxx>
> Link: https://lore.kernel.org/r/20230622095212.20940-1-bp@xxxxxxxxx
> Signed-off-by: Kishon Vijay Abraham I <kvijayab@xxxxxxx>
> ---
> Kindly merge this patch into the stable releases (v6.6+) as it is a
> performance optimization. [It does not apply as-is to earlier releases
> and has to be reworked.]

Sorry for the delay, now queued up.

greg k-h



