On Sat, Oct 30, 2021, Lai Jiangshan wrote:
> A small comment in your proposal: I found that KVM_REQ_TLB_FLUSH_CURRENT
> and KVM_REQ_TLB_FLUSH_GUEST are to flush the "current" vpid only, so some
> special work needs to be added when switching the mmu from L1 to L2 and
> vice versa: handle the requests before switching.

Oh, yeah, that's this snippet of my pseudo patch, but I didn't provide the
kvm_service_pending_tlb_flush_on_nested_transition() implementation, so it's
not exactly obvious what I intended.

@@ -3361,8 +3358,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
 	};
 	u32 failed_index;
 
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-		kvm_vcpu_flush_tlb_current(vcpu);
+	kvm_service_pending_tlb_flush_on_nested_transition(vcpu);
 
 	evaluate_pending_interrupts = exec_controls_get(vmx) &
 		(CPU_BASED_INTR_WINDOW_EXITING | CPU_BASED_NMI_WINDOW_EXITING);

The current code handles CURRENT, but not GUEST; the idea is to shove both
into a helper that can be shared between nVMX and nSVM.

And I believe the "flush" also needs to service KVM_REQ_MMU_SYNC.  For
L1=>L2 it should be irrelevant/impossible, since L1 can only be unsync'd if
L1 and L2 share an MMU, but the L2=>L1 path could result in a lost sync if
something, e.g. an IRQ, prompted a nested VM-Exit before re-entering L2.

Let me know if I misunderstood your comment.  Thanks!
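
FWIW, here's a rough, completely untested sketch of what I had in mind for
the helper.  It assumes the helper lives in x86.c (so it can reach the
static TLB flush helpers) with a declaration exposed to nVMX/nSVM, and the
KVM_REQ_MMU_SYNC handling is the speculative part discussed above:

void kvm_service_pending_tlb_flush_on_nested_transition(struct kvm_vcpu *vcpu)
{
	/*
	 * CURRENT and GUEST flushes target only the current vpid/asid, so
	 * they must be serviced before switching between the L1 and L2
	 * MMUs, otherwise the flush would be applied to the wrong context.
	 */
	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
		kvm_vcpu_flush_tlb_current(vcpu);

	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
		kvm_vcpu_flush_tlb_guest(vcpu);

	/*
	 * A pending sync is also tied to the current MMU; service it now
	 * so that an unsync'd L1 root isn't lost across a nested VM-Exit
	 * (speculative, per the above).
	 */
	if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
		kvm_mmu_sync_roots(vcpu);
}

nVMX would call this from nested_vmx_enter_non_root_mode() (as in the diff
above) and from the nested VM-Exit path, with nSVM doing the same in its
transition paths.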