AMD CPUs currently execute WBINVD in the host when unregistering SEV
guest memory or when deactivating SEV guests. Such cache maintenance is
performed to prevent data corruption, wherein the encrypted (C=1)
version of a dirty cache line might otherwise only be written back
after the memory is written in a different context (ex: C=0), yielding
corruption.

However, WBINVD is performance-costly, especially because it
invalidates processor caches. Strictly speaking, unless the SEV ASID is
being recycled (meaning all existing cache lines with the recycled ASID
must be flushed), the cache invalidation triggered by WBINVD is
unnecessary; only the writeback is needed to prevent data corruption in
the remaining scenarios.

To improve performance in these scenarios, use WBNOINVD when available
instead of WBINVD. WBNOINVD still writes back all dirty lines
(preventing host data corruption by SEV guests) but does *not*
invalidate processor caches.

First, provide helper functions to use WBNOINVD similar to how WBINVD
is invoked. Second, check for WBNOINVD support and execute WBNOINVD if
possible in lieu of WBINVD to avoid cache invalidations.

Note that I have *not* rebased this series atop the proposed targeted
flushing optimizations [0], since those optimizations do not yet appear
to be finalized. However, I'm happy to rebase if that would be helpful.

[0] https://lore.kernel.org/kvm/85frlcvjyo.fsf@xxxxxxx/T/

Changelog
---
v5:
- explicitly encode wbnoinvd as 0xf3 0x0f 0x09 for binutils backwards
  compatibility

v4:
- add comments to wbnoinvd() for clarity on when to use and behavior

v3:
- rebase to tip @ e6609f8bea4a
- use WBINVD in wbnoinvd() if X86_FEATURE_WBNOINVD is not present
- provide sev_writeback_caches() wrapper function in anticipation of
  the aforementioned [0] targeted flushing optimizations
- add Reviewed-by from Mingwei
- reword commits/comments

v2:
- rebase to tip @ dffeaed35cef
- drop unnecessary Xen changes
- reword commits/comments
---

Kevin Loughlin (2):
  x86, lib: Add WBNOINVD helper functions
  KVM: SEV: Prefer WBNOINVD over WBINVD for cache maintenance efficiency

 arch/x86/include/asm/smp.h           |  7 +++++
 arch/x86/include/asm/special_insns.h | 20 +++++++++++++-
 arch/x86/kvm/svm/sev.c               | 41 ++++++++++++++--------------
 arch/x86/lib/cache-smp.c             | 12 ++++++++
 4 files changed, 59 insertions(+), 21 deletions(-)

-- 
2.48.1.262.g85cc9f2d1e-goog
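
P.S. As a rough illustration of the helper shape described above (not
the literal patch; it assumes the kernel's alternative() patching
macro from <asm/alternative.h> and the existing X86_FEATURE_WBNOINVD
flag from <asm/cpufeatures.h>), a single-CPU wbnoinvd() could look
something like this:

  /*
   * Sketch only: write back all dirty cache lines without invalidating
   * the caches.  Falls back to WBINVD on CPUs lacking WBNOINVD so
   * callers can use it unconditionally and still stay correct.
   */
  static __always_inline void wbnoinvd(void)
  {
  	/*
  	 * WBNOINVD is emitted as raw bytes (0xf3 0x0f 0x09) so that
  	 * older binutils without the mnemonic can still assemble it.
  	 */
  	alternative("wbinvd", ".byte 0xf3, 0x0f, 0x09",
  		    X86_FEATURE_WBNOINVD);
  }

The fallback keeps the call sites simple: code that only needs a
writeback (e.g. the SEV paths wrapped by sev_writeback_caches() in this
series) can call the helper everywhere, paying the extra invalidation
cost only on hardware without WBNOINVD support.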