From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>

Now that interrupts can be injected into a TDX vCPU, the vCPU is ready to
be blocked.  Wire up the kvm_x86_ops methods for blocking/unblocking a
vCPU for TDX.  To unblock on pending events, the request-immediate-exit
method is also needed.

Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
Reviewed-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
---
 arch/x86/kvm/vmx/main.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 07ea7211c633..d743de7b087c 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -311,6 +311,14 @@ static void vt_enable_irq_window(struct kvm_vcpu *vcpu)
 	vmx_enable_irq_window(vcpu);
 }
 
+static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return __kvm_request_immediate_exit(vcpu);
+
+	vmx_request_immediate_exit(vcpu);
+}
+
 static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 {
 	if (!is_td(kvm))
@@ -435,7 +443,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
 
-	.request_immediate_exit = vmx_request_immediate_exit,
+	.request_immediate_exit = vt_request_immediate_exit,
 
 	.sched_in = vt_sched_in,
 
-- 
2.25.1
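
For readers outside the KVM tree, the sketch below is a self-contained,
user-space model of the vt_request_immediate_exit() dispatch this patch
adds: TDX vCPUs are routed to the generic __kvm_request_immediate_exit()
helper, presumably because KVM does not control a TD's VMCS and so cannot
use the VMX-specific mechanism, while ordinary VMX vCPUs keep the existing
path.  The struct kvm_vcpu here, its is_td flag, and the printf bodies are
stand-ins invented for illustration; only the routing decision mirrors the
patch.

/* Stand-alone model of the vt_* dispatch pattern added above.  Types
 * and function bodies are illustrative stubs, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct kvm_vcpu {
	bool is_td;		/* stand-in for "this vCPU belongs to a TD" */
};

static bool is_td_vcpu(struct kvm_vcpu *vcpu)
{
	return vcpu->is_td;
}

/* Generic fallback path taken for TDX vCPUs (stubbed here). */
static void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
{
	(void)vcpu;
	printf("generic immediate-exit request (TDX path)\n");
}

/* VMX-specific path (stubbed here). */
static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
{
	(void)vcpu;
	printf("VMX-specific immediate-exit request\n");
}

/* Same routing decision as the patch: TD vCPUs use the generic helper. */
static void vt_request_immediate_exit(struct kvm_vcpu *vcpu)
{
	if (is_td_vcpu(vcpu)) {
		__kvm_request_immediate_exit(vcpu);
		return;
	}

	vmx_request_immediate_exit(vcpu);
}

int main(void)
{
	struct kvm_vcpu td_vcpu = { .is_td = true };
	struct kvm_vcpu vmx_vcpu = { .is_td = false };

	vt_request_immediate_exit(&td_vcpu);	/* takes the generic path */
	vt_request_immediate_exit(&vmx_vcpu);	/* takes the VMX path */
	return 0;
}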