From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>

Intel TDX protects guest VMs from malicious host and certain physical
attacks. The TDX module uses pages provided by the host for both control
structures and for TD guest pages. These pages are encrypted using the
MK-TME encryption engine, with its special requirements around cache
invalidation. For its own security, the TDX module ensures pages are
flushed properly and tracks which usage they are currently assigned to.

For creating and tearing down TD VMs and vCPUs, KVM will need to use the
TDH.PHYMEM.PAGE.RECLAIM, TDH.PHYMEM.CACHE.WB, and TDH.PHYMEM.PAGE.WBINVD
SEAMCALLs.

Add tdh_phymem_page_reclaim() to enable KVM to call
TDH.PHYMEM.PAGE.RECLAIM to reclaim the page for use by the host kernel.
This effectively resets its state in the TDX module's page tracking
(PAMT), if the page is available to be reclaimed. This will be used by
KVM to reclaim the various types of pages owned by the TDX module. It
will have a small wrapper in KVM that retries in the case of a relevant
error code. Don't implement this wrapper in arch/x86 because KVM's
solution around retrying SEAMCALLs will be better located in a single
place.

Add tdh_phymem_cache_wb() to enable KVM to call TDH.PHYMEM.CACHE.WB to
do a cache write back in a way that the TDX module can verify, before it
allows a KeyID to be freed. The KVM code will use this to have a small
wrapper that handles retries. Since the TDH.PHYMEM.CACHE.WB operation is
interruptible, have tdh_phymem_cache_wb() take a resume argument to pass
this info to the TDX module for restarts. It is worth noting that this
SEAMCALL uses a SEAM-specific MSR to do the write back in sections. In
this way it does export some new functionality that affects CPU state.

Add tdh_phymem_page_wbinvd_tdr() to enable KVM to call
TDH.PHYMEM.PAGE.WBINVD to do a cache write back and invalidate of a TDR,
using the global KeyID. The underlying TDH.PHYMEM.PAGE.WBINVD SEAMCALL
requires the related KeyID to be encoded into the SEAMCALL args. Since
the global KeyID is not exposed to KVM, a dedicated wrapper is needed
for TDR-focused TDH.PHYMEM.PAGE.WBINVD operations.

Co-developed-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@xxxxxxxxx>
Reviewed-by: Binbin Wu <binbin.wu@xxxxxxxxxxxxxxx>
Reviewed-by: Yuan Yao <yuan.yao@xxxxxxxxx>
---
uAPI breakout v2:
 - Change to use 'u64' as function parameter to prepare to move SEAMCALL
   wrappers to arch/x86. (Kai)
 - Split to separate patch
 - Move SEAMCALL wrappers from KVM to x86 core;
 - Move TDH_xx macros from KVM to x86 core;
 - Re-write log

uAPI breakout v1:
 - Make argument to C wrapper function struct kvm_tdx * or
   struct vcpu_tdx *. (Sean)
 - Drop unused helpers (Kai)
 - Fix bisectability issues in headers (Kai)
 - Updates from seamcall overhaul (Kai)

v19:
 - Update the commit message to match the patch by Yuan
 - Use seamcall() and seamcall_ret() by paolo

v18:
 - removed stub functions for __seamcall{,_ret}()
 - Added Reviewed-by Binbin
 - Make tdx_seamcall() use struct tdx_module_args instead of taking
   each input.

v16:
 - use struct tdx_module_args instead of struct tdx_module_output
 - Add tdh_mem_sept_rd() for SEPT_VE_DISABLE=1.
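As a rough illustration of how KVM is expected to consume these exports,
the sketch below shows what the small KVM-side retry wrappers mentioned
in the log could look like. This is not part of the patch: the helper
names, the retry budget, and the use of TDX_OPERAND_BUSY /
TDX_INTERRUPTED_RESUMABLE as the relevant error codes are assumptions
made for illustration only.

/*
 * Hypothetical KVM-side wrappers (illustration only, not in this patch).
 * TDX_OPERAND_BUSY and TDX_INTERRUPTED_RESUMABLE are assumed to be the
 * TDX module status codes that warrant a retry/resume.
 */
static int tdx_reclaim_page(u64 pa)
{
	u64 err, rcx, rdx, r8;
	int i;

	/* Arbitrary retry budget; the PAMT entry may be contended. */
	for (i = 0; i < 3; i++) {
		err = tdh_phymem_page_reclaim(pa, &rcx, &rdx, &r8);
		if (err != TDX_OPERAND_BUSY)
			break;
	}

	return err ? -EIO : 0;
}

static void tdx_do_phymem_cache_wb(void)
{
	bool resume = false;
	u64 err;

	/*
	 * TDH.PHYMEM.CACHE.WB writes back the cache in sections and can
	 * return an "interrupted, resumable" status.  Restart where it
	 * left off by passing resume = true on subsequent calls.
	 */
	do {
		err = tdh_phymem_cache_wb(resume);
		resume = true;
	} while (err == TDX_INTERRUPTED_RESUMABLE);

	/* Any error other than "interrupted, resume" is unexpected here. */
	WARN_ON_ONCE(err);
}

Keeping these loops in KVM rather than arch/x86 matches the reasoning
above: KVM already centralizes its SEAMCALL retry policy in one place.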
---
 arch/x86/include/asm/tdx.h  |  3 +++
 arch/x86/virt/vmx/tdx/tdx.c | 44 +++++++++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.h |  4 +++-
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 6951faa37031..0cf8975759de 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -132,6 +132,9 @@ u64 tdh_mng_key_freeid(u64 tdr);
 u64 tdh_mng_init(u64 tdr, u64 td_params, u64 *rcx);
 u64 tdh_vp_init(u64 tdvpr, u64 initial_rcx);
 u64 tdh_vp_init_apicid(u64 tdvpr, u64 initial_rcx, u32 x2apicid);
+u64 tdh_phymem_page_reclaim(u64 page, u64 *rcx, u64 *rdx, u64 *r8);
+u64 tdh_phymem_cache_wb(bool resume);
+u64 tdh_phymem_page_wbinvd_tdr(u64 tdr);
 #else
 static inline void tdx_init(void) { }
 static inline int tdx_cpu_enable(void) { return -ENODEV; }
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index b3003031e0fe..7e7c2e2360af 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1670,3 +1670,47 @@ u64 tdh_vp_init_apicid(u64 tdvpr, u64 initial_rcx, u32 x2apicid)
 	return seamcall(TDH_VP_INIT | (1ULL << TDX_VERSION_SHIFT), &args);
 }
 EXPORT_SYMBOL_GPL(tdh_vp_init_apicid);
+
+u64 tdh_phymem_page_reclaim(u64 page, u64 *rcx, u64 *rdx, u64 *r8)
+{
+	struct tdx_module_args args = {
+		.rcx = page,
+	};
+	u64 ret;
+
+	ret = seamcall_ret(TDH_PHYMEM_PAGE_RECLAIM, &args);
+
+	/*
+	 * Additional error information:
+	 *
+	 * - RCX: page type
+	 * - RDX: owner
+	 * - R8: page size (4K, 2M or 1G)
+	 */
+	*rcx = args.rcx;
+	*rdx = args.rdx;
+	*r8 = args.r8;
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(tdh_phymem_page_reclaim);
+
+u64 tdh_phymem_cache_wb(bool resume)
+{
+	struct tdx_module_args args = {
+		.rcx = resume ? 1 : 0,
+	};
+
+	return seamcall(TDH_PHYMEM_CACHE_WB, &args);
+}
+EXPORT_SYMBOL_GPL(tdh_phymem_cache_wb);
+
+u64 tdh_phymem_page_wbinvd_tdr(u64 tdr)
+{
+	struct tdx_module_args args = {};
+
+	args.rcx = tdr | ((u64)tdx_global_keyid << boot_cpu_data.x86_phys_bits);
+
+	return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
+}
+EXPORT_SYMBOL_GPL(tdh_phymem_page_wbinvd_tdr);
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 64b6504791e1..191bdd1e571d 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -26,14 +26,16 @@
 #define TDH_MNG_INIT			21
 #define TDH_VP_INIT			22
 #define TDH_PHYMEM_PAGE_RDMD		24
+#define TDH_PHYMEM_PAGE_RECLAIM		28
 #define TDH_SYS_KEY_CONFIG		31
 #define TDH_SYS_INIT			33
 #define TDH_SYS_RD			34
 #define TDH_SYS_LP_INIT			35
 #define TDH_SYS_TDMR_INIT		36
+#define TDH_PHYMEM_CACHE_WB		40
+#define TDH_PHYMEM_PAGE_WBINVD		41
 #define TDH_SYS_CONFIG			45
-
 
 /*
  * SEAMCALL leaf:
  *
-- 
2.47.0