> Here is the updated version.
>
> KVM: TDX: Add placeholders for TDX VM/vcpu structure
>
> Add placeholders TDX VM/vCPU structure, overlaying with the existing
                               ^
                               structures

"TDX VM/vCPU structure" -> "TDX VM/vCPU structures".  And I don't quite
understand what "overlaying" means here.

> VMX VM/vCPU structures. Initialize VM structure size and vCPU
> size/align so that x86 KVM-common code knows those sizes irrespective
> of VMX or TDX. Those structures will be populated as guest creation
> logic develops.
>
> TDX requires its data structure for guest and vcpu. For VMX, we

I don't think TDX "requires" anything here.  Introducing separate
structures is a software implementation choice, not a requirement of TDX.

> already have struct kvm_vmx and struct vcpu_vmx. Two options to add
> TDX-specific members.
>
> 1. Append TDX-specific members to kvm_vmx and vcpu_vmx. Use the same
>    struct for both VMX and TDX.
> 2. Define TDX-specific data struct and overlay.
>
> Choose option two because it has less memory overhead and what member
> is needed is clearer
>
> Add helper functions to check if the VM is guest TD and add the conversion
> functions between KVM VM/vCPU and TDX VM/vCPU.

FYI:

Add TDX's own VM and vCPU structures as placeholders to manage and run
TDX guests.

TDX protects guest VMs from a malicious host.  Unlike VMX guests, TDX
guests are crypto-protected.  KVM cannot access TDX guests' memory and
vCPU states directly.  Instead, TDX requires KVM to use a set of
architecture-defined firmware APIs (a.k.a. TDX module SEAMCALLs) to
manage and run TDX guests.

In fact, the ways to manage and run TDX guests and normal VMX guests
are quite different.  Because of that, the current structures ('struct
kvm_vmx' and 'struct vcpu_vmx') to manage VMX guests are not quite
suitable for TDX guests.  E.g., the majority of the members of 'struct
vcpu_vmx' don't apply to TDX guests.

Introduce TDX's own VM and vCPU structures ('struct kvm_tdx' and
'struct vcpu_tdx' respectively) for KVM to manage and run TDX guests.
And instead of building TDX's VM and vCPU structures based on VMX's,
build them directly based on 'struct kvm'.  As a result, TDX and VMX
will have different VM sizes and vCPU sizes/alignments.

Adjust 'vt_x86_ops.vm_size' and the 'vcpu_size' and 'vcpu_align' to the
maximum values of the TDX guest and the VMX guest at module
initialization time so that KVM can always allocate enough memory for
both TDX guests and VMX guests.
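Roughly, following the above (build 'struct kvm_tdx'/'struct vcpu_tdx'
directly on 'struct kvm'/'struct kvm_vcpu'), the placeholder structures plus
the guest-TD check and the conversion helpers could look like the sketch
below.  The member names and the exact is_td() check are only illustrative,
not necessarily what the patch ends up with:

	struct kvm_tdx {
		struct kvm kvm;
		/* TDX-specific VM members get added as guest creation logic develops. */
	};

	struct vcpu_tdx {
		struct kvm_vcpu	vcpu;
		/* TDX-specific vCPU members get added as guest creation logic develops. */
	};

	/* Is this VM a guest TD?  The actual check is illustrative only. */
	static inline bool is_td(struct kvm *kvm)
	{
		return kvm->arch.vm_type == KVM_X86_TDX_VM;
	}

	static inline bool is_td_vcpu(struct kvm_vcpu *vcpu)
	{
		return is_td(vcpu->kvm);
	}

	/* Conversions between KVM VM/vCPU and TDX VM/vCPU rely on the layout above. */
	static inline struct kvm_tdx *to_kvm_tdx(struct kvm *kvm)
	{
		return container_of(kvm, struct kvm_tdx, kvm);
	}

	static inline struct vcpu_tdx *to_tdx(struct kvm_vcpu *vcpu)
	{
		return container_of(vcpu, struct vcpu_tdx, vcpu);
	}

Presumably this layout ('struct kvm' as the first member, container_of() to
get back to the TDX structure) is what "overlaying" refers to, but that should
be spelled out in the changelog.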
[...]

> > > @@ -215,8 +219,18 @@ static int __init vt_init(void)
> > >  	 * Common KVM initialization _must_ come last, after this, /dev/kvm is
> > >  	 * exposed to userspace!
> > >  	 */
> > > +	/*
> > > +	 * kvm_x86_ops is updated with vt_x86_ops.  vt_x86_ops.vm_size must
> > > +	 * be set before kvm_x86_vendor_init().
> > > +	 */
> > >  	vcpu_size = sizeof(struct vcpu_vmx);
> > >  	vcpu_align = __alignof__(struct vcpu_vmx);
> > > +	if (enable_tdx) {
> > > +		vcpu_size = max_t(unsigned int, vcpu_size,
> > > +				  sizeof(struct vcpu_tdx));
> > > +		vcpu_align = max_t(unsigned int, vcpu_align,
> > > +				  __alignof__(struct vcpu_tdx));
> > > +	}
> >
> > Since you are updating vm_size in vt_hardware_setup(), I am wondering
> > whether we can do a similar thing for vcpu_size and vcpu_align.
> >
> > That is, we put them both into 'struct kvm_x86_ops', and you update them in
> > vt_hardware_setup().
> >
> > kvm_init() can then just access them directly; in this way, both the
> > 'vcpu_size' and 'vcpu_align' function parameters can be removed.
>
> Hmm, now I noticed the vm_size can be moved here.  We have
>
>         vcpu_size = sizeof(struct vcpu_vmx);
>         vcpu_align = __alignof__(struct vcpu_vmx);
>         if (enable_tdx) {
>                 vcpu_size = max_t(unsigned int, vcpu_size,
>                                   sizeof(struct vcpu_tdx));
>                 vcpu_align = max_t(unsigned int, vcpu_align,
>                                   __alignof__(struct vcpu_tdx));
>                 vt_x86_ops.vm_size = max_t(unsigned int, vt_x86_ops.vm_size,
>                                            sizeof(struct kvm_tdx));
>         }
>
> We can add vcpu_size, vcpu_align to struct kvm_x86_ops. If we do so, we have
> to touch svm code unnecessarily.

Not only SVM, but also other architectures, because you are going to remove two
function parameters from kvm_init().

That reminds me that other ARCHs may not have a 'kvm_x86_ops'-like structure,
so to keep things simple I am fine with your above approach.
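For reference on why this spills beyond SVM: kvm_init() is arch-common code,
and at the time of this discussion it takes the vCPU size/alignment from each
architecture's module init (common code uses them to size the per-vCPU slab
cache).  A rough sketch of the callers, not the exact upstream code:

	/* include/linux/kvm_host.h: the relevant prototype (sketch) */
	int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module);

	/* arch/x86/kvm/vmx/main.c: vt_init() passes the adjusted values */
	r = kvm_init(vcpu_size, vcpu_align, THIS_MODULE);

	/* arch/x86/kvm/svm/svm.c and the other architectures pass their own sizes, e.g. */
	r = kvm_init(sizeof(struct vcpu_svm), __alignof__(struct vcpu_svm), THIS_MODULE);

Dropping those two parameters would mean touching every kvm_init() caller,
which is why keeping the max_t() adjustment local to vt_init() is the simpler
option.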